Hi All,
We've been testing a 24-drive NVMe software RAID and getting far lower
write speeds than expected. The drives are connected through PLX switch
chips such that 12 drives share one x16 link and the other 12 drives use
another x16 link. The system is a Supermicro 2029U-TN24R4T. The drives
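When chasing numbers like these, one minimal way to measure raw sequential write throughput of a single drive outside of md is a direct-I/O write loop. The sketch below is an editor's illustration only; the device path, block size, and total size are placeholder assumptions, not from the report above, and writing to a raw device destroys its data.

#define _GNU_SOURCE             /* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    const char *dev = "/dev/nvme0n1";  /* placeholder; WARNING: destroys data */
    const size_t blk = 1 << 20;        /* 1 MiB per write */
    const size_t total = 1UL << 30;    /* 1 GiB in total */
    void *buf;
    struct timespec t0, t1;

    int fd = open(dev, O_WRONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }
    if (posix_memalign(&buf, 4096, blk)) { perror("posix_memalign"); return 1; }
    memset(buf, 0xab, blk);

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t off = 0; off < total; off += blk)
        if (pwrite(fd, buf, blk, off) != (ssize_t)blk) { perror("pwrite"); return 1; }
    fsync(fd);                         /* make sure everything is on media */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double s = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%.1f MB/s\n", total / s / 1e6);
    close(fd);
    free(buf);
    return 0;
}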
Hello,
all users running a software RAID 0 on SSDs with discard should disable
discard if they are on any recent kernel from mid-April 2015 onward. The
bug was introduced by commit 47d68979cc968535cb87f3e5f2e6a3533ea48fbd and
the fix is not yet in Linus' tree. The fix can be found here:
http
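As a quick way to see whether anything on a box is currently mounted with the discard option, here is a small sketch that scans /proc/mounts; this is an editor's illustration, not part of the original advisory.

#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/mounts", "r");
    char dev[256], mnt[256], type[64], opts[256];
    if (!f) { perror("/proc/mounts"); return 1; }
    /* fields: device mountpoint fstype options dump pass */
    while (fscanf(f, "%255s %255s %63s %255s %*s %*s", dev, mnt, type, opts) == 4) {
        /* walk the comma-separated option list, looking for exactly "discard" */
        for (char *o = strtok(opts, ","); o; o = strtok(NULL, ","))
            if (strcmp(o, "discard") == 0)
                printf("%s on %s (%s) mounted with discard\n", dev, mnt, type);
    }
    fclose(f);
    return 0;
}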
Hello,
We have a troublesome server fitted with two Samsung 840 Pro SSDs. Besides
other problems raised here a while ago (to which I have still found no
solution), we have one more anomaly (or so I believe).
Although both SSDs have been in service 100% of the time, their wear
differs greatly.
/dev/sda
On Sun, Dec 16, 2007 at 07:56:56PM +0800, Herbert Xu wrote:
>
> What's spooky is that I just did a google and we've had reports
> since 1998 all crashing on exactly the same line in tcp_recvmsg.
However, there have been no reports at all since 2000 apart from this
one, so the earlier ones are probably
Andrew Morton <[EMAIL PROTECTED]> wrote:
>
>> Dec 7 17:20:53 sata_fileserver kernel: Code: 6c 39 df 74 59 8d b6 00
>> 00 00 00 85 db 74 4f 8b 55 cc 8d 43 20 8b 0a 3b 48 18 0f 88 f4 05 00
>> 00 89 ce 2b 70 18 8b 83 90 00 00 00 <0f> b6 50 0d 89 d0 83 e0 02 3c
>> 01 8b 43 50 83 d6 ff 39 c6 0f 82
a dual processor pentium III 933 system.
It has 3gb of ram, an intel stl-2 motherboard.
It also has a promise 100 tx2 pata controller,
a supermicro marvell based 8 port pcix sata controller,
and an nvidia pci based video card.
I have the os on a pata drive, and have made a software raid array
consisting of 4 sata drives attached to the pcix sata controller.
I created the array, and formatted it with reiserfs 3.6.
I have run bonnie++ (filesystem benchmark) on the array without incident.
When I use
From: Michael J. Evans <[EMAIL PROTECTED]>
In current release kernels the md module (Software RAID) uses a static array
(dev_t[128]) to store partition/device info temporarily for autostart.
This patch replaces that static array with a list.
Signed-off-by: Michael J. Evans <[EMAIL PROTECTED]>
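The shape of that change, reconstructed as a sketch from the fragments quoted later in this thread (the node type and the kzalloc call appear below); the list-head name and the absence of locking here are editorial assumptions, not the patch verbatim:

#include <linux/list.h>
#include <linux/slab.h>

struct detected_devices_node {
	struct list_head list;
	dev_t dev;
};

/* replaces the fixed dev_t detected_devices[128] array */
static LIST_HEAD(all_detected_devices);

void md_autodetect_dev(dev_t dev)
{
	struct detected_devices_node *node_detected_dev;

	/* one heap-allocated node per reported device, so no 128-entry cap */
	node_detected_dev = kzalloc(sizeof(*node_detected_dev), GFP_KERNEL);
	if (node_detected_dev) {
		node_detected_dev->dev = dev;
		list_add_tail(&node_detected_dev->list, &all_detected_devices);
	}
}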
Michael Evans wrote:
Oh, I see. I forgot about the changelogs. I'd send out version 5
now, but I'm not sure what kernel version to make the patch against.
2.6.23-rc4 is on kernel.org and I don't see any git snapshots.
Additionally I never could tell what git tree was the 'mainline' as it
isn't
On Aug 28 2007 06:08, Michael Evans wrote:
>
>Oh, I see. I forgot about the changelogs. I'd send out version 5
>now, but I'm not sure what kernel version to make the patch against.
>2.6.23-rc4 is on kernel.org and I don't see any git snapshots.
2.6.23-rc4 is a snapshot in itself, a tagged one
On Monday 27 August 2007, Randy Dunlap wrote:
> On Mon, 27 Aug 2007 15:16:21 -0700 Michael J. Evans wrote:
>
> > =
> > --- linux/drivers/md/md.c.orig 2007-08-21 03:19:42.511576248 -0700
> > +++ linux/drivers/md/md.c 2007-08-21
On Mon, 27 Aug 2007 15:16:21 -0700 Michael J. Evans wrote:
> Note: between 2.6.22 and 2.6.23-rc3-git5
> rdev = md_import_device(dev,0, 0);
> became
> rdev = md_import_device(dev,0, 90);
> So the patch has been edited to patch around that line. (might be fuzzy)
so
On 8/26/07, Randy Dunlap <[EMAIL PROTECTED]> wrote:
> On Sun, 26 Aug 2007 04:51:24 -0700 Michael J. Evans wrote:
>
> > From: Michael J. Evans <[EMAIL PROTECTED]>
> >
>
> Is there any way to tell the user what device (or partition?) is
> being skipped? This printk should just print (confirm) that
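Since only a dev_t is in hand at that point in md_autodetect_dev(), one hypothetical way to honour that request (an illustration, not the patch that was actually posted):

/* hypothetical: report the skipped device by major:minor, the only
 * identity recoverable from a bare dev_t (MAJOR/MINOR from linux/kdev_t.h) */
printk(KERN_WARNING "md: autodetect: skipping device %u:%u\n",
       MAJOR(dev), MINOR(dev));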
On Aug 26 2007 04:51, Michael J. Evans wrote:
> {
>- if (dev_cnt >= 0 && dev_cnt < 127)
>- detected_devices[dev_cnt++] = dev;
>+ struct detected_devices_node *node_detected_dev;
>+ node_detected_dev = kzalloc(sizeof(*node_detected_dev), GFP_KERNEL);\
What's the \ good for,
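For the record, a lone trailing backslash simply splices the next source line onto the current one before tokenization, so outside a #define it has no effect on the compiled code; a stray one, as here, is almost certainly a paste artifact. A minimal editorial example:

int x = 1 + \
        2;  /* compiles exactly like: int x = 1 + 2; */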
Also, I forgot to mention, the reason I added the counters was mostly
for debugging. However they're also as useful in the same way that
listing the partitions when a new disk is added can be (in fact this
augments that and the existing messages the autodetect routines
provide).
As for using
On Wednesday August 22, [EMAIL PROTECTED] wrote:
> From: Michael J. Evans <[EMAIL PROTECTED]>
>
> In current release kernels the md module (Software RAID) uses a static array
> (dev_t[128]) to store partition/device info temporarily for autostart.
>
> This patch re
Add file pattern to MAINTAINER entry
Signed-off-by: Joe Perches <[EMAIL PROTECTED]>
diff --git a/MAINTAINERS b/MAINTAINERS
index d17ae3d..29a2179 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -4205,6 +4205,8 @@ P:Neil Brown
M: [EMAIL PROTECTED]
L: [EMAIL PROTECTED]
S:
> Extrapolating these %cpu numbers makes ZFS the fastest.
>
> Are you sure these numbers are correct?
Note that %cpu numbers for fuse filesystems are inherently skewed,
because the CPU usage of the filesystem process itself is not taken
into account.
So the numbers are not all that good, but
CONFIG:
Software RAID 5 (400GB x 6): Default mkfs parameters for all filesystems.
Kernel was 2.6.21 or 2.6.22; I did these a while ago.
Hardware was SATA with PCI-e only, nothing on the PCI bus.
ZFS was userspace+fuse of course.
Reiser was V3.
EXT4 was created using the recommended options on its
Justin Piszcz wrote:
CONFIG:
Software RAID 5 (400GB x 6): Default mkfs parameters for all filesystems.
Kernel was 2.6.21 or 2.6.22; I did these a while ago.
Hardware was SATA with PCI-e only, nothing on the PCI bus.
ZFS was userspace+fuse of course.
Wow! Userspace and still that efficient
On Mon, 2007-07-30 at 10:29 -0400, Justin Piszcz wrote:
Overall JFS seems the fastest, but reviewing the mailing list for JFS it
seems like there are a lot of problems; especially for people who have used
JFS > 1 year, their speed drops to 5 MiB/s over time, and the defragfs tool
has been removed(?)
I have a multi-core Q6600 CPU on a 10-disk Raptor RAID 5 running XFS.
I just pulled down the Debian Etch 4.0 DVD ISOs, one for x86 and one for
x86_64. When I ran md5sum -c MD5SUMS on the first, I saw ~280-320MB/s; when
I ran the second I saw what I should be seeing, upwards of 500-520MB/s.
NOTE::
Jan Engelhardt wrote:
I am not sure (would have to check again), but I believe both opensuse and
fedora (the latter of which uses LVM for all partitions by default) have
that working, while still using GRUB.
Keyword: partitions. I.e., they partition the hard drive (so that the first
31
Jan Engelhardt wrote:
On Jun 15 2007 16:03, Christian Schmidt wrote:
Thanks for the clarification. I didn't use LVM on the device on purpose,
as root on LVM requires initrd (which I strongly dislike as
yet-another-point-of-failure). As LVM is on the large partition anyway
I'll just add the
Hi Andi,
Andi Kleen wrote:
> Christian Schmidt <[EMAIL PROTECTED]> writes:
>> Where is the inherent limit? The partitioning software, or partitioning
>> all by itself?
>
> DOS style partitioning doesn't support more than 2TB. You either need
> to use EFI partitions (e.g. using parted) or LVM.
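(The 2TB ceiling is simple arithmetic: MBR partition tables store a partition's start and length as 32-bit sector counts, and 2^32 sectors x 512 bytes = 2 TiB.)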