On Fri, 5 Oct 2007, Dean S. Messing wrote:
Brendan Conoboy wrote:
snip
Is the onboard SATA controller real SATA or just an ATA-SATA
converter? If the latter, you're going to have trouble getting faster
performance than any one disk can give you at a time. The output of
'lspci' should tell
On Wed, 3 Oct 2007, Andrew Clayton wrote:
On Wed, 3 Oct 2007 12:48:27 -0400 (EDT), Justin Piszcz wrote:
Also if it is software raid, when you make the XFS filesystem on it,
it sets up a proper (and tuned) sunit/swidth, so why would you want
to change that?
Oh I didn't, the sunit
On Sat, 6 Oct 2007, Justin Piszcz wrote:
On Wed, 3 Oct 2007, Andrew Clayton wrote:
On Wed, 3 Oct 2007 12:48:27 -0400 (EDT), Justin Piszcz wrote:
Also if it is software raid, when you make the XFS filesystem on it,
it sets up a proper (and tuned) sunit/swidth, so why would you want
On Sat, 6 Oct 2007, Dan Williams wrote:
Neil,
Here is the latest spin of the 'stripe_queue' implementation. Thanks to
raid6+bitmap testing done by Mr. James W. Laferriere there have been
several cleanups and fixes since the last release. Also, the changes
are now spread over 4 patches to
On Thu, 4 Oct 2007, Andrew Clayton wrote:
On Thu, 04 Oct 2007 12:46:05 -0400, Steve Cousins wrote:
Andrew Clayton wrote:
On Thu, 4 Oct 2007 10:39:09 -0400 (EDT), Justin Piszcz wrote:
What type (make/model) of the drives?
The drives are 250GB Hitachi Deskstar 7K250 series ATA-6
On Fri, 5 Oct 2007, Andrew Clayton wrote:
On Fri, 5 Oct 2007 06:25:20 -0400 (EDT), Justin Piszcz wrote:
So you have 3 SATA 1 disks:
Yeah, 3 of them in the array, there is a fourth standalone disk which
contains the root fs from which the system boots.
http://digital-domain.net/kernel
On Fri, 5 Oct 2007, Andrew Clayton wrote:
On Fri, 5 Oct 2007 07:08:51 -0400 (EDT), Justin Piszcz wrote:
The mount options are from when the filesystem was made for
sunit/swidth I believe.
-N Causes the file system parameters to be printed
out without really creating
On Fri, 5 Oct 2007, Andrew Clayton wrote:
On Fri, 5 Oct 2007 10:07:47 -0400 (EDT), Justin Piszcz wrote:
Yikes, yeah I would get them off the PCI card, what kind of
motherboard is it? If you don't have a PCI-e based board it probably
won't help THAT much but it still should be better than
On Sat, 6 Oct 2007, Richard Scobie wrote:
Have you had a look at the smartctl -a outputs of all the drives?
Possibly one drive is slow to respond due to seek errors etc., but I
would expect to see this in the log.
If you have a full backup and a spare drive, I would
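The check Richard describes can be sketched as a loop over the array members; the device names and the attribute list below are illustrative, not from the thread:

```shell
# Pull SMART attributes from each array member and surface the counters
# that usually explain a slow drive. Device names are illustrative.
for d in /dev/sd[abc]; do
    echo "== $d =="
    smartctl -a "$d" | grep -Ei 'Reallocated_Sector|Current_Pending|Seek_Error'
done
```

Non-zero raw values for Reallocated_Sector_Ct or Current_Pending_Sector are the usual red flags.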
On Fri, 5 Oct 2007, Shane wrote:
Hello all,
I have a raid5 softraid array using 6x320GB SATA drives. I
would like to reconfigure it to be 3x1tb SATA. Is there a
way to do this using the grow feature of mdadm. IE by
swapping 3 of the 320GB drives out for the 3 1TB drives
allowing the
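The drive-by-drive swap being asked about can be sketched as below; note this only replaces members one at a time (going from 6 devices down to 3 is a separate reshape question), and /dev/md0 and /dev/sdb1 are illustrative:

```shell
# Replace one 320GB member with a 1TB drive; repeat per drive, waiting for
# each resync to finish before touching the next one.
mdadm /dev/md0 --fail /dev/sdb1
mdadm /dev/md0 --remove /dev/sdb1
# physically swap in the 1TB drive and partition it, then:
mdadm /dev/md0 --add /dev/sdb1
# watch /proc/mdstat until the resync completes
```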
On Thu, 4 Oct 2007, Justin Piszcz wrote:
Is NCQ enabled on the drives?
On Thu, 4 Oct 2007, Andrew Clayton wrote:
On Wed, 3 Oct 2007 13:36:39 -0700, David Rees wrote:
Not bad, but not that good, either. Try running xfs_fsr in a nightly
cronjob. By default, it will defrag mounted xfs
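David's nightly-cron suggestion amounts to a one-line crontab entry; the 03:00 start and the 2-hour budget below are illustrative:

```shell
# Run the XFS defragmenter for at most 7200 seconds each night at 03:00.
# Without arguments, xfs_fsr walks all mounted XFS filesystems.
0 3 * * *  /usr/sbin/xfs_fsr -t 7200
```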
On Thu, 4 Oct 2007, Andrew Clayton wrote:
On Thu, 4 Oct 2007 10:09:22 -0400 (EDT), Justin Piszcz wrote:
Is NCQ enabled on the drives?
I don't think the drives are capable of that. I don't see any mention
of NCQ in dmesg.
Andrew
What type (make/model) of the drives?
True
On Thu, 4 Oct 2007, Andrew Clayton wrote:
On Thu, 4 Oct 2007 10:09:22 -0400 (EDT), Justin Piszcz wrote:
Is NCQ enabled on the drives?
I don't think the drives are capable of that. I don't see any mention
of NCQ in dmesg.
Andrew
BTW You may not see 'NCQ' in the kernel messages unless
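One way to check without grepping dmesg (sda is illustrative): libata exposes the queue depth per device, and a depth above 1 means NCQ is in use.

```shell
# Queue depth 1 = NCQ off; up to 31 = NCQ active.
cat /sys/block/sda/device/queue_depth
# To disable NCQ for an A/B test:
# echo 1 > /sys/block/sda/device/queue_depth
```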
On Thu, 4 Oct 2007, Andrew Clayton wrote:
On Thu, 4 Oct 2007 10:39:09 -0400 (EDT), Justin Piszcz wrote:
What type (make/model) of the drives?
The drives are 250GB Hitachi Deskstar 7K250 series ATA-6 UDMA/100
True, the controller may not be able to do it either.
What types of disks
On Thu, 4 Oct 2007, Andrew Clayton wrote:
On Thu, 4 Oct 2007 10:10:02 -0400 (EDT), Justin Piszcz wrote:
Also, did performance just go to crap one day or was it gradual?
IIRC I just noticed one day that firefox and vim were stalling. That was
back in February/March I think. At the time
Have you checked fragmentation?
xfs_db -c frag -f /dev/md3
What does this report?
Justin.
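If you want to watch it over time, the frag command's output can be reduced to the single percentage; a sketch, using the same /dev/md3:

```shell
# Print just the XFS fragmentation factor. The sed pattern matches xfs_db
# frag output of the form "actual N, ideal M, fragmentation factor X.YZ%".
FRAG=$(xfs_db -f -c frag /dev/md3 | sed -n 's/.*fragmentation factor \([0-9.]*\)%.*/\1/p')
echo "fragmentation: ${FRAG}%"
```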
On Wed, 3 Oct 2007, Andrew Clayton wrote:
Hi,
Hardware:
Dual Opteron 2GHz cpus. 2GB RAM. 4 x 250GB SATA hard drives. 1 (root file
system) is connected to the onboard Silicon Image 3114 controller.
Also if it is software raid, when you make the XFS filesystem on it, it
sets up a proper (and tuned) sunit/swidth, so why would you want to change
that?
Justin.
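For reference, the sunit/swidth that mkfs.xfs derives can be reproduced by hand; a sketch of the arithmetic, in 512-byte sectors (the 64k chunk and 4-disk RAID5 are illustrative):

```shell
# sunit = one chunk expressed in 512-byte sectors; swidth = sunit times the
# number of data-bearing disks (n-1 for RAID5).
CHUNK_KB=64
NDISKS=4
SUNIT=$(( CHUNK_KB * 2 ))
SWIDTH=$(( SUNIT * (NDISKS - 1) ))
echo "sunit=${SUNIT} swidth=${SWIDTH}"   # prints sunit=128 swidth=384
```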
On Wed, 3 Oct 2007, Justin Piszcz wrote:
Have you checked fragmentation?
xfs_db -c frag -f /dev/md3
What does this report
On Wed, 3 Oct 2007, Andrew Clayton wrote:
On Wed, 3 Oct 2007 12:48:27 -0400 (EDT), Justin Piszcz wrote:
Also if it is software raid, when you make the XFS filesystem on it,
it sets up a proper (and tuned) sunit/swidth, so why would you want
to change that?
Oh I didn't, the sunit
What does cat /sys/block/md0/md/mismatch_cnt say?
That fragmentation looks normal/fine.
Justin.
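mismatch_cnt is only updated by a check or repair pass, so the full sequence looks like this (md0 as in the thread):

```shell
# Trigger a consistency check, then read the mismatch count when it is done.
echo check > /sys/block/md0/md/sync_action
# wait for the check to finish:  watch cat /proc/mdstat
cat /sys/block/md0/md/mismatch_cnt    # 0 means parity and data agree
```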
On Wed, 3 Oct 2007, Andrew Clayton wrote:
On Wed, 3 Oct 2007 12:43:24 -0400 (EDT), Justin Piszcz wrote:
Have you checked fragmentation?
You know, that never even occurred to me. I've gotten
On Tue, 2 Oct 2007, Rustedt, Florian wrote:
Hello list,
some folks reported severe filesystem crashes with ext3 and reiserfs on
md RAID levels 1 and 5.
Is this safe now? Or should I only use non-journalling filesystems on
software RAID devices?
Kind regards, Florian Rustedt
On Tue, 2 Oct 2007, Goswin von Brederlow wrote:
Hi,
we (Q-Leap networks) are in the process of setting up a high speed
storage cluster and we are having some problems getting proper
performance.
Our test system consists of a 2x dual core system with 2 dual channel
UW SCSI controllers
On Mon, 1 Oct 2007, Dale Dunlea wrote:
Hi,
I have a board with an AMCC440 processor, running RAID5 using the
async-tx interface. In general, it works well, but I have found a test
case that consistently causes a hard lockup of the entire system.
What makes this case odd is that I have only
So you got 2x with those optimizations I mentioned? Nice, did you
previously get that speed, or?
On Mon, 1 Oct 2007, Mr. James W. Laferriere wrote:
Hello Justin , Three separate single runs of bonnie(*) .
Please note , the linux-2.6.23-rc6 , Concerns your email of
On Mon, 1 Oct 2007, Daniel Santos wrote:
It stopped the reconstruction process and the output of /proc/mdstat was :
oraculo:/home/dlsa# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid1] [raid0] [linear]
md0 : active raid5 sdc1[3](S) sdb1[4](F) sdd1[0]
781417472 blocks
Kernel: 2.6.23-rc8 (older kernels do this as well)
When running the following command:
/usr/bin/time /usr/sbin/bonnie++ -d /x/test -s 16384 -m p34 -n 16:10:16:64
It hangs unless I increase various parameters md/raid such as the
stripe_cache_size etc..
# ps auxww | grep D
USER PID
On Wed, 26 Sep 2007, Ralf Gross wrote:
Justin Piszcz wrote:
What was the command line you used for that output?
tiobench.. ?
tiobench --numruns 3 --threads 1 --threads 2 --block 4096 --size 2
--size 2 because the server has 16 GB RAM.
Ralf
Here is my output on my SW RAID5
-Original Message-
From: Justin Piszcz [EMAIL PROTECTED]
Date: Wed, 26 Sep 2007 05:52:39
To:Ralf Gross [EMAIL PROTECTED]
Cc:[EMAIL PROTECTED], linux-raid@vger.kernel.org
Subject: Re: mkfs options for a 16x hw raid5 and xfs (mostly large files
On Thu, 27 Sep 2007, Richard Scobie wrote:
Justin Piszcz wrote:
Raptors are inherently known for their poor speed when NCQ is
enabled, I see 20-30MiB/s better performance with NCQ off.
Hi Justin,
Have you tested this for multiple reader/writers?
Regards,
Richard
On Thu, 27 Sep 2007, Richard Scobie wrote:
Justin Piszcz wrote:
If you have a good repeatable benchmark you want me to run with it on/off
let me know, no I only used bonnie++/iozone/tiobench/dd but not any
parallelism with those utilities.
Perhaps iozone with 5 threads, NCQ on and off
get good speeds, but for writes-- probably not.
Then re-benchmark.
Justin.
On Tue, 18 Sep 2007, Dean S. Messing wrote:
Justin Piszcz wrote:
On Tue, 18 Sep 2007, Dean S. Messing wrote:
:
:
:
: I'm not getting nearly the read speed I expected
: from a newly defined software RAID 5 array
On Wed, 19 Sep 2007, Dean S. Messing wrote:
Justin Piszcz wrote:
: One of the 5-10 tuning settings:
:
: blockdev --getra /dev/md0
:
: Try setting it to 4096,8192,16384,32768,65536
:
: blockdev --setra 4096 /dev/md0
:
:
I discovered your January correspondence to the list about this. Yes
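The readahead suggestion is easy to turn into a sweep; a sketch (the 1 GiB test read is illustrative):

```shell
# Time a streaming read at each readahead setting; pick the fastest.
for RA in 4096 8192 16384 32768 65536; do
    blockdev --setra $RA /dev/md0
    echo 3 > /proc/sys/vm/drop_caches   # measure the disks, not the page cache
    echo "readahead=$RA:"
    dd if=/dev/md0 of=/dev/null bs=1M count=1024 2>&1 | tail -n 1
done
```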
On Tue, 18 Sep 2007, Dean S. Messing wrote:
I'm not getting nearly the read speed I expected
from a newly defined software RAID 5 array
across three disk partitions (on the 3 drives,
of course!).
Would someone kindly point me straight?
After defining the RAID 5 I did `hdparm -t /dev/md0'
On Mon, 3 Sep 2007, Xavier Bestel wrote:
Hi,
I have a server running with RAID5 disks, under debian/stable, kernel
2.6.18-5-686. Yesterday the RAID resync'd for no apparent reason,
without even mdadm sending a mail to warn about that:
This is normal, you probably are running Debian(?) or a
On Tue, 28 Aug 2007, T. Eichstädt wrote:
Hello all,
thanks for your responses.
Quoting Bill Davidsen [EMAIL PROTECTED]:
Neil Brown wrote:
On Monday August 27, [EMAIL PROTECTED] wrote:
I have a few people who asked me this as well, RAID10 or similar (SW).
I am not so sure, with RAID1
On Mon, 27 Aug 2007, T. Eichstädt wrote:
Hello all,
I have 4 HDDs and I want to use mirroring and striping.
I am wondering what difference between the following two solutions is:
- raid0 on top of 2 raid1 devices (raid1+0)
- directly using the raid10 module
Perhaps someone can give me a
On Sun, 26 Aug 2007, Abe Skolnik wrote:
Dear Mr./Dr./Prof. Brown et al,
I recently had the unpleasant experience of creating an MD array for
the purpose of booting off it and then not being able to do so. Since
I had already made changes to the array's contents relative to that
which I
On Fri, 24 Aug 2007, Tomasz Chmielewski wrote:
I built RAID-5 on a Debian Etch machine running 2.6.22.5 with this command:
mdadm --create /dev/md0 --chunk=64 --level=raid5 --raid-devices=5 /dev/sda1
/dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
After some time, it was synchronized just fine.
On Fri, 24 Aug 2007, Tomasz Chmielewski wrote:
Tomasz Chmielewski wrote:
(...)
Perhaps, the bitmap is needed then? I guess by default, no internal bitmap
is added?
# mdadm -X /dev/md0
Filename : /dev/md0
Magic :
mdadm: invalid bitmap magic 0x0, the bitmap
On Fri, 24 Aug 2007, Tomasz Chmielewski wrote:
Justin Piszcz wrote:
On Fri, 24 Aug 2007, Tomasz Chmielewski wrote:
I built RAID-5 on a Debian Etch machine running 2.6.22.5 with this
command:
mdadm --create /dev/md0 --chunk=64 --level=raid5 --raid-devices=5
/dev/sda1 /dev/sdb1 /dev
On Mon, 20 Aug 2007, Dat Chu wrote:
I am trying to find the mdadm version that supports Linear RAID hot
grow. Does anyone have a link to point me to? I am currently running
2.6.2.
With warm regards,
Dat Chu
-
To unsubscribe from this list: send the line unsubscribe linux-raid in
the body of a
Date: Thu, 2 Aug 2007 10:33:21 -0400 (EDT)
From: Justin Piszcz [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
CC: linux-raid@vger.kernel.org
Subject: Re: checkarray script
# dpkg -L mdadm|grep check
/etc/logcheck
/etc/logcheck/ignore.d.server
/etc/logcheck/ignore.d.server/mdadm
/etc/logcheck/violations.d
CONFIG:
Software RAID 5 (400GB x 6): Default mkfs parameters for all filesystems.
Kernel was 2.6.21 or 2.6.22, did these awhile ago.
Hardware was SATA with PCI-e only, nothing on the PCI bus.
ZFS was userspace+fuse of course.
Reiser was V3.
EXT4 was created using the recommended options on its
On Mon, 30 Jul 2007, Miklos Szeredi wrote:
Extrapolating these %cpu numbers makes ZFS the fastest.
Are you sure these numbers are correct?
Note that %cpu numbers for fuse filesystems are inherently skewed,
because the CPU usage of the filesystem process itself is not taken
into account.
On Mon, 30 Jul 2007, Dan Williams wrote:
[trimmed all but linux-raid from the cc]
On 7/30/07, Justin Piszcz [EMAIL PROTECTED] wrote:
CONFIG:
Software RAID 5 (400GB x 6): Default mkfs parameters for all filesystems.
Kernel was 2.6.21 or 2.6.22, did these awhile ago.
Can you give 2.6.22.1
Quick question-- under Kernel 2.4 without 2TB support enabled, the only
other option is to use auto-carving to get the maximum amount of space
easily, however, after doing this (2TB, 2TB, 1.8TB) for a 10 x 750GB
array, only the first partition remains after reboot.
Before reboot:
/dev/sdb1
On Sat, 21 Jul 2007, Justin Piszcz wrote:
Quick question-- under Kernel 2.4 without 2TB support enabled, the only other
option is to use auto-carving to get the maximum amount of space easily,
however, after doing this (2TB, 2TB, 1.8TB) for a 10 x 750GB array, only the
first partition
On Sat, 21 Jul 2007, Justin Piszcz wrote:
On Sat, 21 Jul 2007, Justin Piszcz wrote:
Quick question-- under Kernel 2.4 without 2TB support enabled, the only
other option is to use auto-carving to get the maximum amount of space
easily, however, after doing this (2TB, 2TB, 1.8TB) for a 10
On Fri, 20 Jul 2007, J. Hart wrote:
Justin Piszcz wrote:
Any reason you are using 2.6.19-rc5? Why not use 2.6.22.(1)?
I just wanted to try to understand the reason for the problem before changing
to a new kernel. I had not heard that any such problem had been encountered,
though I
On Thu, 19 Jul 2007, Lars Schimmer wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Justin Piszcz wrote:
On Wed, 18 Jul 2007, Rui Santos wrote:
Hi,
I'm getting a strange slow performance behavior on a recently installed
Server. Here are the details:
Server: Asus AS-TS500-E4A
Board
On Tue, 17 Jul 2007, dean gaudet wrote:
On Mon, 16 Jul 2007, David Greaves wrote:
Bryan Christ wrote:
I do have the type set to 0xfd. Others have said that auto-assemble only
works on RAID 0 and 1, but just as Justin mentioned, I too have another box
with RAID5 that gets auto assembled
I recently got a chance to test SW RAID5 using 750GB disks (10) in a RAID5
on a 3ware card, model no: 9550SXU-12
The bottom line is the controller is doing some weird caching with writes
on SW RAID5 which makes it not worth using.
Recall, with SW RAID5 using regular SATA cards with (mind
On Wed, 18 Jul 2007, Rui Santos wrote:
Hi,
I'm getting a strange slow performance behavior on a recently installed
Server. Here are the details:
Server: Asus AS-TS500-E4A
Board: Asus DSBV-D (
http://uk.asus.com/products.aspx?l1=9&l2=39&l3=299&l4=0&model=1210&modelmenu=2
)
Hard Drives: 3x Seagate
On Wed, 18 Jul 2007, Gabor Gombas wrote:
On Wed, Jul 18, 2007 at 06:23:25AM -0400, Justin Piszcz wrote:
I recently got a chance to test SW RAID5 using 750GB disks (10) in a RAID5
on a 3ware card, model no: 9550SXU-12
The bottom line is the controller is doing some weird caching with writes
On Wed, 18 Jul 2007, Giuseppe Ghibò wrote:
Justin Piszcz ha scritto:
I recently got a chance to test SW RAID5 using 750GB disks (10) in a RAID5
on a 3ware card, model no: 9550SXU-12
The bottom line is the controller is doing some weird caching with writes
on SW RAID5 which makes
On Wed, 18 Jul 2007, Hannes Dorbath wrote:
On 18.07.2007 13:19, Justin Piszcz wrote:
For the HW RAID tests (2) at the bottom of the e-mail, no, I did not set
nr_requests or use the deadline scheduler.
For the SW RAID tests, I applied similar optimizations, I am probably not
at the latest
On Wed, 18 Jul 2007, Al Boldi wrote:
Justin Piszcz wrote:
UltraDense-AS-3ware-R5-9-disks,16G,50676,89,96019,34,46379,9,60267,99,5010
98,56,248.5,0,16:10:16/64,240,3,21959,84,1109,10,286,4,22923,91,544,6
UltraDense-AS-3ware-R5-9-disks,16G,49983,88,96902,37,47951,10,59002,99,529
On Wed, 18 Jul 2007, Sander wrote:
Justin Piszcz wrote (ao):
On Wed, 18 Jul 2007, Sander wrote:
Justin Piszcz wrote (ao):
It's too bad that there are no regular 4 port SATA PCI-e controllers
out there.
Is there a disadvantage to using a SAS controller from, for example,
lsi.com?
http
On Wed, 18 Jul 2007, Hannes Dorbath wrote:
On 18.07.2007 12:23, Justin Piszcz wrote:
I am sure one of your questions is, well, why use SW RAID5 on the
controller? Because SW RAID5 is usually much faster than HW RAID5, at
least in my tests:
Though that's no answer to your question, I
On Mon, 16 Jul 2007, Greg Neumarke wrote:
Hi. I'm looking at setting up software RAID across 5 drives on an Intel
motherboard that has a ICH7R 4-port SATA controller and also an additional 4
SATA ports on a Marvell controller.
Is there anything I should be aware of when creating a software
On Fri, 13 Jul 2007, Andrew Klaassen wrote:
--- Justin Piszcz [EMAIL PROTECTED] wrote:
To give you an example I get 464MB/s write and
627MB/s with a 10 disk
raptor software raid5.
Is that with the 9650?
Andrew
Sorry, no, it's with software raid 5 and the 965 chipset + three SATA PCI-e
On Sat, 14 Jul 2007, Bill Davidsen wrote:
Bryan Christ wrote:
My apologies if this is not the right place to ask this question. Hopefully
it is.
I created a RAID5 array with:
mdadm --create /dev/md0 --level=5 --raid-devices=5 /dev/sda1 /dev/sdb1
/dev/sdc1 /dev/sdd1 /dev/sde1
mdadm -D
On Sat, 14 Jul 2007, Andrew Klaassen wrote:
--- Justin Piszcz [EMAIL PROTECTED] wrote:
On Fri, 13 Jul 2007, Andrew Klaassen wrote:
--- Justin Piszcz [EMAIL PROTECTED] wrote:
To give you an example I get 464MB/s write and
627MB/s with a 10 disk
raptor software raid5
On Sat, 14 Jul 2007, Andrew Klaassen wrote:
--- Justin Piszcz [EMAIL PROTECTED] wrote:
03:00.0 RAID bus controller: Silicon Image, Inc. SiI
3132 Serial ATA Raid
II Controller (rev 01)
$19.99 2 port SYBA cards (Silicon Image 3132s)
http://www.directron.com/sdsa2pex2ir.html
Cool, thanks
On Sat, 14 Jul 2007, Bill Davidsen wrote:
Justin Piszcz wrote:
On Sat, 14 Jul 2007, Bill Davidsen wrote:
Bryan Christ wrote:
My apologies if this is not the right place to ask this question.
Hopefully it is.
I created a RAID5 array with:
mdadm --create /dev/md0 --level=5 --raid
On Sat, 14 Jul 2007, jeff stern wrote:
hi, everyone.. i have a problem.
SUMMARY
i've got a linux software RAID1 setup, with 2 SATA drives (/dev/sdf1,
/dev/sdg1) set up to be /dev/md0. these 2 drives together hold my
/home directories. the / and / partitions are on another drive, a
standard
On Sat, 14 Jul 2007, Mr. James W. Laferriere wrote:
Hello All , I was under the impression that a 'machine check' would
be caused by some near to the CPU hardware failure , Not a bad disk ?
I was also under the impression that software raid should be a little
more resilient than this.
But
On Fri, 13 Jul 2007, Joshua Baker-LePain wrote:
My new system has a 3ware 9650SE-24M8 controller hooked to 24 500GB WD
drives. The controller is set up as a RAID6 w/ a hot spare. OS is CentOS 5
x86_64. It's all running on a couple of Xeon 5130s on a Supermicro X7DBE
motherboard w/ 4GB of
On Fri, 13 Jul 2007, mail wrote:
Hi List,
I am very new to raid, and I am having a problem.
I made a raid10 array, but I only used 2 disks. Since then, one failed,
and my system crashes with a kernel panic.
I copied all the data, and I would like to start over. How can I start
from
mdadm --create \
--verbose /dev/md3 \
--level=5 \
--raid-devices=10 \
--chunk=1024 \
--force \
--run
/dev/sd[cdefghijkl]1
Justin.
On Thu, 28 Jun 2007, Peter Rabbitson wrote:
Justin Piszcz wrote:
The results speak for themselves:
http
10 disks total.
Justin.
On Thu, 28 Jun 2007, David Chinner wrote:
On Wed, Jun 27, 2007 at 07:20:42PM -0400, Justin Piszcz wrote:
For drives with 16MB of cache (in this case, raptors).
That's four (4) drives, right?
If so, how do you get a block read rate of 578MB/s from
4 drives? That's
On Thu, 28 Jun 2007, Peter Rabbitson wrote:
Justin Piszcz wrote:
mdadm --create \
--verbose /dev/md3 \
--level=5 \
--raid-devices=10 \
--chunk=1024 \
--force \
--run
/dev/sd[cdefghijkl]1
Justin.
Interesting, I came up with the same results (1M
On Thu, 28 Jun 2007, Peter Rabbitson wrote:
Justin Piszcz wrote:
On Thu, 28 Jun 2007, Peter Rabbitson wrote:
Interesting, I came up with the same results (1M chunk being superior)
with a completely different raid set with XFS on top:
...
Could it be attributed to XFS itself?
Peter
On Thu, 28 Jun 2007, Peter Rabbitson wrote:
Justin Piszcz wrote:
On Thu, 28 Jun 2007, Peter Rabbitson wrote:
Interesting, I came up with the same results (1M chunk being superior)
with a completely different raid set with XFS on top:
...
Could it be attributed to XFS itself?
Peter
On Thu, 28 Jun 2007, Justin Piszcz wrote:
On Thu, 28 Jun 2007, Peter Rabbitson wrote:
Justin Piszcz wrote:
On Thu, 28 Jun 2007, Peter Rabbitson wrote:
Interesting, I came up with the same results (1M chunk being superior)
with a completely different raid set with XFS on top
On Thu, 28 Jun 2007, Matti Aarnio wrote:
On Thu, Jun 28, 2007 at 10:24:54AM +0200, Peter Rabbitson wrote:
Interesting, I came up with the same results (1M chunk being superior)
with a completely different raid set with XFS on top:
mdadm --create \
--level=10 \
--chunk=1024
Still reviewing but it appears 8 + 256k looks good.
p34-noatime-logbufs=2-lbsize=256k,15696M,78172.3,99,450320,86.6667,178683,29,79808,99,565741,42.,610.067,0,16:10:16/64,2362,19.6667,15751.7,46,3993.33,22,2545.67,24.,13976,41,3781.33,28.6667
The results speak for themselves:
http://home.comcast.net/~jpiszcz/chunk/index.html
For drives with 16MB of cache (in this case, raptors).
Justin.
On Wed, 27 Jun 2007, Justin Piszcz wrote:
The results speak for themselves:
http://home.comcast.net/~jpiszcz/chunk/index.html
,33791.1,43.5556,176630,37.,72235.1,11.5556,34424.9,44,247925,18.,271.644,0,16:10:16/64,560,4.9,2928,8.9,1039.56,5.8,571.556,5.3,1729.78,5.3,1289.33,9.3
On Wed, 27 Jun 2007, Justin Piszcz wrote:
For drives with 16MB of cache (in this case, raptors).
Justin.
On Wed, 27 Jun 2007
If you set the stripe_cache_size less than or equal to the chunk size of
the SW RAID5 array, the processes will hang in D-state indefinitely until
you raise the stripe_cache_size above the chunk size.
Tested with 2.6.22-rc6 and a 128 KiB RAID5 Chunk Size, when I set it to
256 KiB, no problems.
There is some kind of bug: I also tried 256 KiB; it ran 2 tests
(bonnie++) OK, but then on the third, BANG, bonnie++ is now in D-state.
A pretty nasty bug.
On Tue, 26 Jun 2007, Justin Piszcz wrote:
If you set the stripe_cache_size less than or equal to the chunk size of the
SW RAID5
On Tue, 26 Jun 2007, Jon Nelson wrote:
On Mon, 25 Jun 2007, Justin Piszcz wrote:
Neil has a patch for the bad speed.
What does the patch do?
In the meantime, do this (or better, set it to 30, for instance):
# Set minimum and maximum raid rebuild speed to 60MB/s.
echo Setting minimum
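The rebuild-speed knobs live in /proc; a sketch matching the 60MB/s comment (values are in KB/s):

```shell
# Set minimum and maximum raid rebuild speed to 60MB/s (values in KB/s).
echo 60000 > /proc/sys/dev/raid/speed_limit_min
echo 60000 > /proc/sys/dev/raid/speed_limit_max
```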
On Tue, 26 Jun 2007, Jon Nelson wrote:
On Tue, 26 Jun 2007, Justin Piszcz wrote:
On Tue, 26 Jun 2007, Jon Nelson wrote:
On Mon, 25 Jun 2007, Justin Piszcz wrote:
Neil has a patch for the bad speed.
What does the patch do?
In the meantime, do this (or better, set it to 30
:)
Justin.
On Tue, 26 Jun 2007, Jon Nelson wrote:
On Tue, 26 Jun 2007, Justin Piszcz wrote:
On Tue, 26 Jun 2007, Jon Nelson wrote:
On Tue, 26 Jun 2007, Justin Piszcz wrote:
On Tue, 26 Jun 2007, Jon Nelson wrote:
On Mon, 25 Jun 2007, Justin Piszcz wrote:
Neil has a patch for the bad speed
mdadm /dev/md0 --fail /dev/sda1
On Tue, 26 Jun 2007, Maurice Hilarius wrote:
Good day all.
Scenario:
Pair of identical disks.
partitions:
Disk 0:
/boot - NON-RAIDed
swap
/ - rest of disk
Disk 01
/boot1 - placeholder to take same space as /boot on disk0 - NON-RAIDed
swap
/ - rest of disk
I
On Mon, 25 Jun 2007, Bill Davidsen wrote:
Justin Piszcz wrote:
I have found a 16MB stripe_cache_size results in optimal performance after
testing many many values :)
We have discussed this before, my experience has been that after 8 x stripe
size the performance gains hit diminishing
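When weighing large stripe_cache_size values, it helps to know the RAM cost: the value is counted in 4 KiB pages per member device. A sketch of the arithmetic (the 8192 entries and 10 disks are illustrative):

```shell
# RAM pinned by the stripe cache = entries * page size * member disks.
SCS=8192
NDISKS=10
MB=$(( SCS * 4096 * NDISKS / 1024 / 1024 ))
echo "stripe cache RAM: ${MB} MiB"   # prints stripe cache RAM: 320 MiB
# To apply:  echo 8192 > /sys/block/md3/md/stripe_cache_size
```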
On Mon, 25 Jun 2007, Justin Piszcz wrote:
On Mon, 25 Jun 2007, Bill Davidsen wrote:
Justin Piszcz wrote:
I have found a 16MB stripe_cache_size results in optimal performance after
testing many many values :)
We have discussed this before, my experience has been that after 8 x stripe
On Mon, 25 Jun 2007, Justin Piszcz wrote:
On Mon, 25 Jun 2007, Justin Piszcz wrote:
On Mon, 25 Jun 2007, Bill Davidsen wrote:
Justin Piszcz wrote:
I have found a 16MB stripe_cache_size results in optimal performance
after testing many many values :)
We have discussed this before
On Mon, 25 Jun 2007, Justin Piszcz wrote:
On Mon, 25 Jun 2007, Justin Piszcz wrote:
On Mon, 25 Jun 2007, Justin Piszcz wrote:
On Mon, 25 Jun 2007, Bill Davidsen wrote:
Justin Piszcz wrote:
I have found a 16MB stripe_cache_size results in optimal performance
after testing many
On Mon, 25 Jun 2007, Jon Nelson wrote:
On Thu, 21 Jun 2007, Jon Nelson wrote:
On Thu, 21 Jun 2007, Raz wrote:
What is your raid configuration ?
Please note that the stripe_cache_size is acting as a bottle neck in some
cases.
Well, that's kind of the point of my email. I'll try to
I have found a 16MB stripe_cache_size results in optimal performance after
testing many many values :)
On Fri, 22 Jun 2007, Raz wrote:
On 6/22/07, Jon Nelson [EMAIL PROTECTED] wrote:
On Thu, 21 Jun 2007, Raz wrote:
What is your raid configuration ?
Please note that the stripe_cache_size
Dave,
Questions inline and below.
On Mon, 18 Jun 2007, David Chinner wrote:
On Fri, Jun 15, 2007 at 04:36:07PM -0400, Justin Piszcz wrote:
Hi,
I was wondering if the XFS folks can recommend any optimizations for high
speed disk arrays using RAID5?
[sysctls snipped]
None of those options
On Mon, 18 Jun 2007, Mike wrote:
I'm creating a larger backup server that uses bacula (this
software works well). The way I'm going about this I need
lots of space in the filesystem where temporary files are
stored. I have been looking at the Norco (link at the bottom),
but there seem to be
On Mon, 18 Jun 2007, Dexter Filmore wrote:
On Monday 18 June 2007 17:22:06 David Greaves wrote:
Dexter Filmore wrote:
1661 minutes is *way* too long. It's a 4x250GiB SATA array and usually
takes 3 hours to resync or check, for that matter.
So, what's this?
kernel, mdadm versions?
I seem
On Thu, 14 Jun 2007, Luca Berra wrote:
On Wed, Jun 13, 2007 at 07:50:06AM -0400, Justin Piszcz wrote:
You don't even need that, just do this:
1. echo 'DEVICE /dev/hd*[0-9] /dev/sd*[0-9]' > mdadm.conf
do _NOT_ do the above, _never_.
recent mdadm do not need the DEVICE line
for older one use
On Thu, 14 Jun 2007, Rich Walker wrote:
Justin Piszcz [EMAIL PROTECTED] writes:
On Thu, 14 Jun 2007, Rich Walker wrote:
[snip]
The array is used as a single PV/VG for LVM.
What I want to do is to
(a) reduce the PV/VG so it would fit in 160*3 rather than 160*4
(b) remove the last 160GB
I would think that
mdadm --grow --raid-devices=4 -z max /dev/md1 /dev/hdg2 /dev/hde2
/dev/sda2 /dev/hdk2
Make sure the new device is added as a spare first the -z should not be
needed, by default it should use all space on the new drive.
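The add-as-spare-then-grow sequence described above looks roughly like this (device names as in the thread; treat it as a sketch):

```shell
# Add the new disk as a spare, then reshape the array to 4 members.
mdadm /dev/md1 --add /dev/hdk2
mdadm --grow /dev/md1 --raid-devices=4
# once the reshape completes, grow the LVM PV on top:
# pvresize /dev/md1
```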
On Thu, 14 Jun 2007, Justin Piszcz wrote
You don't even need that, just do this:
1. echo 'DEVICE /dev/hd*[0-9] /dev/sd*[0-9]' > mdadm.conf
2. mdadm --examine --scan --config=mdadm.conf
This will search all partitions and give the relevant SW raid information:
ARRAY /dev/md/4 level=raid5 metadata=1 num-devices=5
On Mon, 11 Jun 2007, Dexter Filmore wrote:
I recently upgraded my file server, yet I'm still unsatisfied with the write
speed.
Machine now is a Athlon64 3400+ (Socket 754) equipped with 1GB of RAM.
The four RAID disks are attached to the board's onboard SATA controller
(Sil3114 attached via
On Mon, 11 Jun 2007, Dexter Filmore wrote:
On Monday 11 June 2007 14:47:50 Justin Piszcz wrote:
On Mon, 11 Jun 2007, Dexter Filmore wrote:
I recently upgraded my file server, yet I'm still unsatisfied with the
write speed.
Machine now is a Athlon64 3400+ (Socket 754) equipped with 1GB