Re: The SX4 challenge

2008-01-16 Thread Mark Lord
Jeff Garzik wrote: .. Thus, the "SX4 challenge" is a challenge to developers to figure out the most optimal configuration for this hardware, given the existing MD and DM work going on. .. This sort of RAID optimization hardware is not unique to the SX4, so hopefully we can work out a way to ta…

The SX4 challenge

2008-01-16 Thread Jeff Garzik
Promise just gave permission to post the docs for their PDC20621 (i.e. SX4) hardware: http://gkernel.sourceforge.net/specs/promise/pdc20621-pguide-1.2.pdf.bz2 joining the existing PDC20621 DIMM and PLL docs: http://gkernel.sourceforge.net/specs/promise/pdc20621-pguide-dimm-1.6.pdf.bz2 http://g…

Re: How do I get rid of old device?

2008-01-16 Thread Justin Piszcz
On Thu, 17 Jan 2008, Neil Brown wrote: On Wednesday January 16, [EMAIL PROTECTED] wrote: p34:~# mdadm /dev/md3 --zero-superblock p34:~# mdadm --examine --scan ARRAY /dev/md0 level=raid1 num-devices=2 UUID=f463057c:9a696419:3bcb794a:7aaa12b2 ARRAY /dev/md1 level=raid1 num-devices=2 UUID=98e494…

Re: How do I get rid of old device?

2008-01-16 Thread Neil Brown
On Wednesday January 16, [EMAIL PROTECTED] wrote: > p34:~# mdadm /dev/md3 --zero-superblock > p34:~# mdadm --examine --scan > ARRAY /dev/md0 level=raid1 num-devices=2 > UUID=f463057c:9a696419:3bcb794a:7aaa12b2 > ARRAY /dev/md1 level=raid1 num-devices=2 > UUID=98e4948c:c6685f82:e082fd95:e7f45529 > …

Re: How do I get rid of old device?

2008-01-16 Thread Justin Piszcz
On Wed, 16 Jan 2008, Justin Piszcz wrote: p34:~# mdadm /dev/md3 --zero-superblock p34:~# mdadm --examine --scan ARRAY /dev/md0 level=raid1 num-devices=2 UUID=f463057c:9a696419:3bcb794a:7aaa12b2 ARRAY /dev/md1 level=raid1 num-devices=2 UUID=98e4948c:c6685f82:e082fd95:e7f45529 ARRAY /dev/md2 le…

How do I get rid of old device?

2008-01-16 Thread Justin Piszcz
p34:~# mdadm /dev/md3 --zero-superblock p34:~# mdadm --examine --scan ARRAY /dev/md0 level=raid1 num-devices=2 UUID=f463057c:9a696419:3bcb794a:7aaa12b2 ARRAY /dev/md1 level=raid1 num-devices=2 UUID=98e4948c:c6685f82:e082fd95:e7f45529 ARRAY /dev/md2 level=raid1 num-devices=2 UUID=330c9879:73af7d…
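The thread above turns on where md metadata actually lives: `mdadm --examine --scan` reads superblocks off the member disks themselves, so a stale array keeps reappearing until each former member is zeroed. A sketch of the usual cleanup (privileged commands shown as comments; /dev/md3 and the member names are examples, not taken from the thread):

```shell
# Zero the superblock on each member device, with the array stopped:
#
#   mdadm --stop /dev/md3
#   mdadm --zero-superblock /dev/sdc1 /dev/sdd1
#
# Any leftover ARRAY line should also come out of mdadm.conf so the array
# isn't reassembled at boot; demonstrated here on a scratch copy:
conf=$(mktemp)
cat > "$conf" <<'EOF'
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=f463057c:9a696419:3bcb794a:7aaa12b2
ARRAY /dev/md3 level=raid5 num-devices=4 UUID=deadbeef:00000000:00000000:00000000
EOF
sed -i '\|/dev/md3|d' "$conf"       # drop the stale entry
grep -c '^ARRAY' "$conf"            # → 1 (only md0 remains)
```

Note that zeroing the /dev/md3 node itself, as in the quoted session, does not touch the per-member superblocks, which is why the scan output was unchanged.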

Re: Linux Software RAID 5 + XFS Multi-Benchmarks / 10 Raptors Again

2008-01-16 Thread Justin Piszcz
On Thu, 17 Jan 2008, Al Boldi wrote: Justin Piszcz wrote: On Wed, 16 Jan 2008, Al Boldi wrote: Also, can you retest using dd with different block-sizes? I can do this, moment.. I know about oflag=direct but I choose to use dd with sync and measure the total time it takes. /usr/bin/time -…

Re: [PATCH 001 of 6] md: Fix an occasional deadlock in raid5

2008-01-16 Thread Neil Brown
On Tuesday January 15, [EMAIL PROTECTED] wrote: > On Wed, 16 Jan 2008 00:09:31 -0700 "Dan Williams" <[EMAIL PROTECTED]> wrote: > > > > heheh. > > > > > > it's really easy to reproduce the hang without the patch -- i could > > > hang the box in under 20 min on 2.6.22+ w/XFS and raid5 on 7x750GB. > …

Re: Linux Software RAID 5 + XFS Multi-Benchmarks / 10 Raptors Again

2008-01-16 Thread Al Boldi
Justin Piszcz wrote: > On Wed, 16 Jan 2008, Al Boldi wrote: > > > Also, can you retest using dd with different block-sizes? > > I can do this, moment.. > > > I know about oflag=direct but I choose to use dd with sync and measure the > total time it takes. > /usr/bin/time -f %E -o ~/$i=chunk.txt bas…
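The dd-plus-sync timing described in this exchange can be sketched as follows, scaled down to 16 MiB on a temp file (the list posts used multi-GiB runs against /dev/md3, and `/usr/bin/time -f %E`; the loop and sizes here are illustrative). `conv=fsync` forces the data to disk before dd exits, so the page cache doesn't hide the write cost:

```shell
# Time the same 16 MiB write with a small and a large block size.
out=$(mktemp)
for run in "4k 4096" "1M 16"; do
    set -- $run                         # $1 = block size, $2 = count
    start=$(date +%s.%N)
    dd if=/dev/zero of="$out" bs=$1 count=$2 conv=fsync 2>/dev/null
    end=$(date +%s.%N)
    awk -v s="$start" -v e="$end" -v bs="$1" \
        'BEGIN { printf "bs=%s: %.2f s\n", bs, e - s }'
done
```

On a real array the gap between block sizes is much more pronounced than on a cached temp file, which is the effect Al Boldi is asking Justin to measure.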

(no subject)

2008-01-16 Thread Jed Davidow
unsubscribe linux-raid

Re: Linux Software RAID 5 + XFS Multi-Benchmarks / 10 Raptors Again

2008-01-16 Thread Justin Piszcz
On Wed, 16 Jan 2008, Greg Cormier wrote: What sort of tools are you using to get these benchmarks, and can I use them for ext3? Very interested in running this on my server. Thanks, Greg You can use whatever suits you, such as untar kernel source tree, copy files, untar backups, etc., y…

Re: Linux Software RAID 5 + XFS Multi-Benchmarks / 10 Raptors Again

2008-01-16 Thread Greg Cormier
What sort of tools are you using to get these benchmarks, and can I use them for ext3? Very interested in running this on my server. Thanks, Greg On Jan 16, 2008 11:13 AM, Justin Piszcz <[EMAIL PROTECTED]> wrote: > For these benchmarks I timed how long it takes to extract a standard 4.4 > GiB…

Re: Linux Software RAID 5 + XFS Multi-Benchmarks / 10 Raptors Again

2008-01-16 Thread Justin Piszcz
On Wed, 16 Jan 2008, Al Boldi wrote: Justin Piszcz wrote: For these benchmarks I timed how long it takes to extract a standard 4.4 GiB DVD: Settings: Software RAID 5 with the following settings (until I change those too): Base setup: blockdev --setra 65536 /dev/md3 echo 16384 > /sys/block/m…

Re: Linux Software RAID 5 + XFS Multi-Benchmarks / 10 Raptors Again

2008-01-16 Thread Al Boldi
Justin Piszcz wrote: > For these benchmarks I timed how long it takes to extract a standard 4.4 > GiB DVD: > > Settings: Software RAID 5 with the following settings (until I change > those too): > > Base setup: > blockdev --setra 65536 /dev/md3 > echo 16384 > /sys/block/md3/md/stripe_cache_size > e…

Re: Linux Software RAID 5 + XFS Multi-Benchmarks / 10 Raptors Again

2008-01-16 Thread Justin Piszcz
On Wed, 16 Jan 2008, Justin Piszcz wrote: For these benchmarks I timed how long it takes to extract a standard 4.4 GiB DVD: Settings: Software RAID 5 with the following settings (until I change those too): http://home.comcast.net/~jpiszcz/sunit-swidth/newresults.html Any idea why an suni…

Linux Software RAID 5 + XFS Multi-Benchmarks / 10 Raptors Again

2008-01-16 Thread Justin Piszcz
For these benchmarks I timed how long it takes to extract a standard 4.4 GiB DVD: Settings: Software RAID 5 with the following settings (until I change those too): Base setup: blockdev --setra 65536 /dev/md3 echo 16384 > /sys/block/md3/md/stripe_cache_size echo "Disabling NCQ on all disks..." …
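The base setup quoted in this post, gathered into one sketch. /dev/md3 and the disk list are specific to the poster's 10-Raptor box, and the NCQ loop is an assumption about how the truncated "Disabling NCQ" step was done (setting queue_depth to 1 is the usual way to disable NCQ via sysfs); substitute your own device names before trying any of it:

```shell
blockdev --setra 65536 /dev/md3                     # 65536 sectors = 32 MiB readahead on the array
echo 16384 > /sys/block/md3/md/stripe_cache_size    # enlarge the raid5 stripe cache
echo "Disabling NCQ on all disks..."
for disk in sdc sdd sde sdf sdg sdh sdi sdj sdk sdl; do   # hypothetical member list
    echo 1 > /sys/block/$disk/device/queue_depth    # queue_depth=1 turns NCQ off
done
```

All three knobs trade memory (and, for NCQ, per-drive reordering) for sequential throughput, which is what the DVD-extraction benchmark in this thread measures.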