Re: Best way to test a new RAID configuration

2001-03-16 Thread Art Boulatov

David Christensen wrote:

 I've recently set up a new RAID-5 configuration and wanted to test it
 thoroughly before I commit data to it.  I'm not so worried about drive
 failures so I don't want to power down drives while the system is running,
 but I do want to test the drives out by reading/writing/verifying for a few
 days.  Anyone know of any good (easy to set up) applications for doing that,
 or perhaps a shell script that might do the same thing?
 
 David Christensen
 -
 To unsubscribe from this list: send the line "unsubscribe linux-raid" in
 the body of a message to [EMAIL PROTECTED]

Hi,

I have a setup of 2 SCSI disks with 8 partitions on each
consumed by software RAID0.

I started bonnie++ with really weird parameters
on each of the 8 md devices at the same time,
so I had 8 bonnie++ instances chewing on my RAID0 configuration...

Maybe there are more exhaustive tests,
but this one helped me find a bad block on one
of the brand-new IBM SCSI hard drives :)
I did not see any problems with RAID or reiserfs, though.
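For reference, launching one bonnie++ per device like that can be sketched as
below. The mount points (/mnt/md0../mnt/md7) and the nobody user are
assumptions, not from Art's post; the loop only prints the commands as a dry
run, so you can review them first (drop the echo and append & to run them in
parallel, then wait for them all).

```shell
# Hypothetical sketch: one bonnie++ run per md device, launched in parallel.
# Mount points /mnt/md0../mnt/md7 and the nobody user are assumptions.
# The echo makes this a dry run; remove it and background each command
# with & (then "wait") to actually hammer all devices at once.
for i in 0 1 2 3 4 5 6 7; do
    echo "bonnie++ -d /mnt/md$i -u nobody"
done
```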

Art.




Re: Best way to test a new RAID configuration

2001-03-16 Thread Alvin Oga


hi ya

best way to test raid5 is to write large ( 1GB-2GB ) data files to it...
and then compare the files

-- oooppss... just re-read david's post ... skip the part about
   powering down the disks.. etc...

then pull one of the disks offline
and see if it still compares...

insert a fresh disk in its place...
and see if it re-syncs while you are creating a new "large file"
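The write-then-compare cycle above can be sketched as follows. The $target
path and file size are placeholders for illustration; on a real array you
would point $target at a directory on the RAID volume, write 1-2 GB as Alvin
suggests, and re-run the verify step after pulling and re-adding a disk.

```shell
# Sketch of the write/verify cycle. $target is a placeholder; on a real
# test it should live on the RAID volume, and the file should be 1-2 GB.
target=/tmp/raid-verify
mkdir -p $target
dd if=/dev/urandom of=$target/big.dat bs=1M count=4 2>/dev/null
md5sum $target/big.dat > $target/big.md5
# ... fail a disk, let the array resync, then re-verify:
md5sum -c --quiet $target/big.md5 && echo "data survived"
```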

-- for testing raid1 mirroring...
- write to "A" and take "a" offline
and see if the mirror ( "B" ) has the data you just wrote


taking scsi drives offline is tough ???
taking ide drives offline is easy ... use hdparm to shut it down ???
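With the raidtools of that era, taking a member disk out of a software array
and re-adding it looked roughly like the commands below. The device names
(/dev/md0, /dev/sdb1, /dev/hdb) are hypothetical, and the snippet only prints
the commands rather than running them, since they need root and a real array.

```shell
# Dry run: print era-appropriate raidtools commands for failing and
# re-adding an array member. Device names are hypothetical placeholders.
cat <<'EOF'
raidhotremove /dev/md0 /dev/sdb1   # drop the member out of the array
raidhotadd /dev/md0 /dev/sdb1      # re-add it; watch /proc/mdstat resync
hdparm -Y /dev/hdb                 # spin an IDE drive down, as Alvin notes
EOF
```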

===
=== best way to make sure you dont lose the data on the Raid
=== is to have a backup somewhere else...
===

have fun raiding
alvin
http://www.Linux-1U.net ... 1U Raid5 ... 500Gb each ..
http://www.linux-consulting.com/Raid/Docs/raid_test*


On Fri, 16 Mar 2001, Derek Vadala wrote:

 On Fri, 16 Mar 2001, David Christensen wrote:
 
  I've recently set up a new RAID-5 configuration and wanted to test it
  thoroughly before I commit data to it.  I'm not so worried about drive
  failures so I don't want to power down drives while the system is running,
  but I do want to test the drives out by reading/writing/verifying for a few
  days.  Anyone know of any good (easy to set up) applications for doing that,
  or perhaps a shell script that might do the same thing?
 
 You could use Bonnie and some Perl scripts to hammer the drives for a few
 days. 
 
 ---
 Derek Vadala, [EMAIL PROTECTED], http://www.cynicism.com/~derek
 




Re: Best way to test a new RAID configuration

2001-03-16 Thread Ross Vandegrift

   Anyone know of any good (easy to setup) applications for doing that,
   or perhaps a shell script that might do the same thing?

As a matter of fact, I have a very nice one right here.  Someone mailed this to the 
list back in the day when I asked this same question.  It's pretty killer.

Ross Vandegrift
[EMAIL PROTECTED]
[EMAIL PROTECTED]


#!/bin/bash -
# -*- Shell-script -*-
#
# Copyright (C) 1999 Bibliotech Ltd., 631-633 Fulham Rd., London SW6 5UQ.
#
# $Id: stress.sh,v 1.2 1999/02/10 10:58:04 rich Exp $
#
# Change log:
#
# $Log: stress.sh,v $
# Revision 1.2  1999/02/10 10:58:04  rich
# Use cp instead of tar to copy.
#
# Revision 1.1  1999/02/09 15:13:38  rich
# Added first version of stress test program.
#

# Stress-test a file system by doing multiple
# parallel disk operations. This does everything
# in MOUNTPOINT/stress.

nconcurrent=4
content=/usr/doc
stagger=yes

while getopts "c:n:s" c; do
    case $c in
    c)
        content=$OPTARG
        ;;
    n)
        nconcurrent=$OPTARG
        ;;
    s)
        stagger=no
        ;;
    *)
        echo 'Usage: stress.sh [-options] MOUNTPOINT'
        echo 'Options: -c Content directory'
        echo '         -n Number of concurrent accesses (default: 4)'
        echo '         -s Avoid staggering start times'
        exit 1
        ;;
    esac
done

shift $(($OPTIND-1))
if [ $# -ne 1 ]; then
    echo 'For usage: stress.sh -?'
    exit 1
fi

mountpoint=$1

echo 'Number of concurrent processes:' $nconcurrent
echo 'Content directory:' $content '(size:' `du -s $content | awk '{print $1}'` 'KB)'

# Check the mount point is really a mount point.

if [ `df | awk '{print $6}' | grep ^$mountpoint\$ | wc -l` -lt 1 ]; then
    echo $mountpoint: This doesn\'t seem to be a mountpoint. Try not
    echo to use a trailing / character.
    exit 1
fi

# Recreate the stress directory, removing any previous contents.

echo Warning: This will DELETE anything in $mountpoint/stress. Type yes to confirm.
read line
if [ "$line" != "yes" ]; then
    echo "Script abandoned."
    exit 1
fi

rm -rf $mountpoint/stress
if ! mkdir $mountpoint/stress; then
    echo Problem creating $mountpoint/stress directory. Do you have sufficient
    echo access permissions\?
    exit 1
fi

echo Created $mountpoint/stress directory.

# Construct MD5 sums over the content directory.

echo -n "Computing MD5 sums over content directory: "
( cd $content && find . -type f -print0 | xargs -0 md5sum | \
    sort -k 2 -o $mountpoint/stress/content.sums )
echo done.

# Start the stressing processes.

echo -n "Starting stress test processes: "

pids=""

p=1
while [ $p -le $nconcurrent ]; do
    echo -n "$p "

    (

    # Wait for all processes to start up.
    if [ "$stagger" = "yes" ]; then
        sleep $((10*$p))
    else
        sleep 10
    fi

    while true; do

        # Remove old directories.
        echo -n "D$p "
        rm -rf $mountpoint/stress/$p

        # Copy content into this process's directory.
        echo -n "W$p "
        mkdir $mountpoint/stress/$p
        #( cd $content && tar cf - . ) | ( cd $mountpoint/stress/$p && tar xf - )
        cp -ax $content/* $mountpoint/stress/$p

        # Compare the content and the copy.
        echo -n "R$p "
        ( cd $mountpoint/stress/$p && find . -type f -print0 | xargs -0 md5sum | \
            sort -k 2 -o /tmp/stress.$$.$p )
        diff $mountpoint/stress/content.sums /tmp/stress.$$.$p
        rm -f /tmp/stress.$$.$p
    done
    ) &

    pids="$pids $!"

    p=$(($p+1))
done

echo
echo "Process IDs: $pids"
echo "Press ^C to kill all processes"

trap "kill $pids" SIGINT

wait

kill $pids