RE: Abysmal write performance on HW RAID5

2007-12-02 Thread Daniel Korstad


 -Original Message-
 From: ChristopherD [mailto:[EMAIL PROTECTED]
 Sent: Sunday, December 02, 2007 4:03 AM
 To: linux-raid@vger.kernel.org
 Subject: Abysmal write performance on HW RAID5
 
 
In the process of upgrading my RAID5 array, I've run into a brick wall (4MB/sec avg write perf!) that I could use some help figuring out.  I'll start with the quick backstory and setup.
 
 Common Setup:
 
Dell Dimension XPS T800, salvaged from Mom. (i440BX chipset, Pentium3 @ 800MHz)
 768MB DDR SDRAM @ 100MHZ FSB  (3x256MB DIMM)
 PCI vid card (ATI Rage 128)
 PCI 10/100 NIC (3Com 905)
 PCI RAID controller (LSI MegaRAID i4 - 4 channel PATA)
4 x 250GB (WD2500) UltraATA drives, each connected to a separate channel on the controller
 Ubuntu Feisty Fawn
 
In the LSI BIOS config, I set up the full capacity of all four drives as a single logical disk using RAID5 @ 64K stripe size.  I installed the OS from the CD, allowing it to create a 4GB swap partition (sda2) and use the rest as a single ext3 partition (sda1) with roughly 700GB space.
 
This setup ran fine for months as my home fileserver.  Being new to RAID at the time, I didn't know or think about tuning or benchmarking, etc.  I do know that I often moved ISO images to this machine from my gaming rig using both SAMBA and FTP, with xfer limited by the 100Mbit LAN (~11MB/sec).

That sounds about right; 11MB/sec * 8 (bits/byte) = 88Mbit/sec on your 100Mbit LAN.

 
About a month or so ago, I hit capacity on the partition.  I dumped some movies off to a USB drive (500GB PATA) and started watching the drive aisle at Fry's.  Last week, I saw what I'd been waiting for: Maxtor 500GB drives @ $99 each.  So, I bought three of them and started this adventure.
 
 
I'll skip the details on the pain in the butt of moving 700GB of data onto various drives of various sizes...the end result was the following change to my setup:
 
 3 x Maxtor 500GB PATA drives (7200rpm, 16MB cache)
 1 x IBM/Hitachi Deskstar 500GB PATA (7200rpm, 8MB cache)
 
Each drive still on a separate controller channel, this time configured into two logical drives:
 Logical Disk 1:  RAID0, 16GB, 64K stripe size (sda)
 Logical Disk 2:  RAID5, 1.5TB, 128K stripe size (sdb)
 
 
I also took this opportunity to upgrade to the newest Ubuntu 7.10 (Gutsy), and having done some reading, planned to make some tweaks to the partition formats.  After fighting with the standard CD, which refused to install the OS without also formatting the root partition (but not offering any control of the formatting), I downloaded the alternate CD and used the textmode installer.
 
I set up the partitions like this:
sda1: 14.5GB ext3, 256MB journal (mounted data=ordered), 4K block size, stride=16, sparse superblocks, no resize_inode, 1GB reserved for root
sda2: 1.5GB linux swap
sdb1: 1.5TB ext2, largefile4 (4MB per inode), stride=32, sparse superblocks, no resize_inode, 0 reserved for root
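For reference, a sketch of mke2fs invocations that would give roughly that layout; stride = chunk size / block size, so 64K/4K = 16 for sda and 128K/4K = 32 for sdb, and data=ordered is a mount option rather than a format option (flags per the mke2fs man page; the reserved percentage is approximated):

mkfs.ext3 -b 4096 -J size=256 -E stride=16 -O sparse_super,^resize_inode -m 7 /dev/sda1   # ~1GB of 14.5GB reserved
mkfs.ext2 -b 4096 -T largefile4 -E stride=32 -O sparse_super,^resize_inode -m 0 /dev/sdb1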
 
The format command was my first hint of a problem.  The block group creation counter spun very rapidly up to 9800/11600, then paused, and I heard the drives thrash.  The block groups completed at a slower pace, and then the final creation process took several minutes.
 
But the real shocker was transferring my data onto this new partition.  FOUR MEGABYTES PER SECOND?!?!
 
My initial plan was to plug a single old data drive into the motherboard's ATA port, thinking the transfer speed within a single machine would be the fastest possible mechanism.  Wrong.  I ended up mounting the drives using USB enclosures to my laptop (RedHat EL 5.1) and sharing them via NFS.
 
So, deciding the partition was disposable (still unused), I fired up dd to run some block device tests:
dd if=/dev/zero of=/dev/sdb bs=1M count=25

This ran silently and showed 108MB/sec??  OK, that beats 4...let's try again!  Now I hear drive activity, and the result says 26MB/sec.  Running it a third time immediately brought the rate down to 4MB/sec.  Apparently, the first 64MB or so runs nice and fast (cache? the i4 only has 16MB onboard).
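Worth noting: that first fast run is mostly the page cache and the controller's cache soaking up writes before the disks see them.  A sketch of runs that take caching out of the picture, assuming a GNU dd new enough for these flags:

dd if=/dev/zero of=/dev/sdb bs=1M count=500 oflag=direct     # O_DIRECT: bypass the page cache
dd if=/dev/zero of=/dev/sdb bs=1M count=500 conv=fdatasync   # include the final flush in the timing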
 
 I also ran iostat -dx in the background during a 26GB directory copy
 operation, reporting on 60-sec intervals.  This is a typical output:
 
Device:  rrqm/s  wrqm/s   r/s    w/s  rMB/s  wMB/s  avgrq-sz  avgqu-sz    await  svctm   %util
sda        0.00    0.18  0.00   0.48   0.00   0.00     11.03      0.01    21.66  16.73    0.61
sdb        0.00    0.72  0.03  64.28   0.00   3.95    125.43    137.57  2180.23  15.85  100.02
 
 
So, the RAID5 device has a huge queue of write requests with an average wait time of more than 2 seconds @ 100% utilization?  Or is this a bug in iostat?
 
At this point, I'm all ears...I don't even know where to start.  Is ext2 not a good format for volumes of this size?  Then how to explain the block device xfer rate being so bad, too?  Is it that I have one drive in the array that's a different

RE: Offtopic: hardware advice for SAS RAID6

2007-11-21 Thread Daniel Korstad
I would be very interested to hear how that card works, and in other suggestions and discussion too.
 
I have a 10 disk RAID 6 all inside a large case, using on-board MB SATA and a couple of 4-port PCI SATA cards.  For one, my PCI bus is saturated and a bottleneck, and it is a bit of a pain to replace drives by having to open the case each time.
 
I have been running this for years without issues, but would like to upgrade 
the system at some point and would like to use mdadm software raid, with an 
external enclosure allowing easy drive swapping.
 
The most I can contribute to this thread are the SATA multilane enclosures I have been eyeing.
http://www.pc-pitstop.com/sata_enclosures/
 
But like you, I need a card that is Linux friendly and leaves hard drive RAID control to mdadm.
 
I'll follow this one and would like to hear about your experiences with the SAS 
card you choose.

- Original Message -
From: Richard Michael [EMAIL PROTECTED]
Sent: Tue, 11/20/2007 10:08am
To: linux-raid@vger.kernel.org
Subject: Offtopic: hardware advice for SAS RAID6

On the heels of last week's post asking about hardware recommendations,
I'd like to ask a few questions too. :)

I'm considering my first SAS purchase.  I'm planning to build a software
RAID6 array using a SAS JBOD attached to a linux box.  I haven't decided
on any of the hardware specifics.

I'm leaning toward this PCI express LSI 3801e controller:
http://www.lsi.com/storage_home/products_home/host_bus_adapters/sas_hbas/lsisas3801e/index.html

Although, Adaptec has a similar PCI-X model.

I'd probably purchase a cheap Dell rackmount 1U server (e.g. PowerEdge
860) for the controller.  It has dual Gb ethernet, which I'd channel
bond for decent network I/O performance.
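For the bonding piece, a minimal sketch of the classic Linux setup (interface names and the RHEL-style modprobe.conf location are assumptions; balance-alb needs no special switch support, unlike 802.3ad):

# /etc/modprobe.conf
alias bond0 bonding
options bond0 mode=balance-alb miimon=100

modprobe bonding
ifconfig bond0 192.168.0.10 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1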

The JBOD would ideally be 1U or 2U holding 8 or 10 disks.  If I
understand SAS correctly, I'd probably have a unit with 2 SFF-8088
miniSAS connectors (although, I believe these connectors only support 4
devices, so if the JBOD is 8 disks, I don't know what would happen).

I'm completely undecided on the JBOD itself; recommendations here would
be greatly appreciated.  It's a bit of a shot in the dark.

I'd appreciate feedback and suggestions on the hardware above, or a
discussion of the performance.  (E.g. a discussion of SAS
ports/bandwidth, PCI express lanes/bandwidth, disks and network to
determine the throughput of this setup.)

Cheers,
Richard
-
To unsubscribe from this list: send the line unsubscribe linux-raid in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


RE: SWAP file on a RAID-10 array possible?

2007-08-15 Thread Daniel Korstad
I used this site to bring my existing Linux install to a RAID 1.   It worked 
great for me.
 
http://wiki.clug.org.za/wiki/RAID-1_in_a_hurry_with_grub_and_mdadm
 
 
 
- Original Message -
From: [EMAIL PROTECTED] on behalf of Tomas France 
Sent: Wed, 8/15/2007 5:28am
To: linux-raid@vger.kernel.org
Subject: Re: SWAP file on a RAID-10 array possible? 
 
 
Thanks for the answer, David!

I kind of think RAID-10 is a very good choice for a swap file. For now I 
will need to setup the swap file on a simple RAID-1 array anyway, I just 
need to be prepared when it's time to add more disks and transform the whole 
thing into RAID-10... which will be big fun anyway, for sure ;)

By the way, does anyone know if there is a comprehensive how-to on software 
RAID with mdadm available somewhere? I mean a website where I could get 
answers to questions like "How to convert your system from no RAID to 
RAID-1", "from RAID-1 to RAID-5/10", "how to set up LILO/GRUB to boot from a 
RAID-1 array", etc. Don't get me wrong, I have done my homework and found 
a lot of info on the topic, but a lot of it is several years old and many 
things have changed since then. And it's quite scattered too..

Tomas


- Original Message - 
From: David Greaves [EMAIL PROTECTED]
To: Tomas France [EMAIL PROTECTED]
Cc: linux-raid@vger.kernel.org
Sent: Wednesday, August 15, 2007 12:10 PM
Subject: Re: SWAP file on a RAID-10 array possible?


 Tomas France wrote:
 Hi everyone,

 I apologize for asking such a fundamental question on the Linux-RAID list 
 but the answers I found elsewhere have been contradicting one another.

 So, is it possible to have a swap file on a RAID-10 array?
 yes.

 mkswap /dev/mdX
 swapon /dev/mdX

 Should you use RAID-10 for swap? That's philosophy :)

 David
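To make the swap permanent across reboots, an /etc/fstab line along these lines (md number hypothetical):

/dev/md2  none  swap  defaults  0 0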
 -
 To unsubscribe from this list: send the line unsubscribe linux-raid in
 the body of a message to [EMAIL PROTECTED]
 More majordomo info at  http://vger.kernel.org/majordomo-info.html
 

-
To unsubscribe from this list: send the line unsubscribe linux-raid in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


RE: Software based SATA RAID-5 expandable arrays?

2007-07-16 Thread Daniel Korstad
You will learn a lot by building your own system, and it will allow you to do more 
with it as far as other services go, if you want.
 
However, again if you are still having problems with distro selection, 
configuration and commands, here is another NAS install solution I stumbled on.
http://www.openfiler.com
 
They appear to use a Fedora distro, remade into their own.  They also use the mdadm packages.
 
I have not played with this, but if I had to choose, I would use this one, since I have had more experience with mdadm as opposed to what FreeNAS is using.
 
Their version of mdadm is not the very latest, however.  That won't affect you unless you want to be able to grow your RAID; then you will need to update it.
 
https://www.openfiler.com/community/forums/viewtopic.php?id=741
 
Oh, and they do support creating RAID6 arrays
http://www.openfiler.com/screenshots/shots/RAID_Mgmt3.png
 
 
Just giving you more options.
Dan.
 
 
- Original Message -
From: Daniel Korstad 
Sent: Mon, 7/16/2007 7:48am
To: Michael 
Subject: RE: Software based SATA RAID-5 expandable arrays? 
 
 
Something I ran across a year ago.
http://www.freenas.org/index.php?option=com_versions&Itemid=51

I played with it for a day or so and it looked impressive.  The project is still very much alive, and they just released a new version a couple of days ago.

The caveat, or the reason I did not use this, is that I use my Linux box for so many other things (web server, Asterisk (VoIP), Chillispot, VMware Server, firewall, ...).

If you go this route, you will pretty much dedicate your box for just a NAS 
function.  The project is an ISO OS you download and install.  This greatly 
simplifies things but it ties you down a bit.

After it is built, clients connect to it via several options you can configure: CIFS (this is Windows file sharing, or Samba), FTP, NFS, RSYNCD, SSHD, Unison, AFP.

It also supports hard disk standby time, and advanced power management for your 
drives.

However, if that is all you really want (a NAS) and you are having issues with other Linux distros, this is a pretty simple way to get one up and running as a NAS, with a nice web interface for all the configuration.

Other things to consider: I don't think it has RAID6, or it did not the last time I played with it a year ago.  And I think the code is different from mdadm, so you would be looking to their forums for help if you had issues.

Also, here is the manual for you..
http://www.freenas.org/downloads/docs/user-docs/FreeNAS-SUG.pdf


Cheers,
Dan.

- Original Message -
From: [EMAIL PROTECTED] on behalf of Daniel Korstad 
Sent: Fri, 7/13/2007 1:24pm
To: big.green.jelly.bean 
Cc: davidsen ; linux-raid 
Subject: RE: Software based SATA RAID-5 expandable arrays? 


I can't speak to SuSe issues, but I believe there is some confusion about the packages and command syntax.

So hang on, we are going for a ride, step by step...

check and repair are not packages per se.

You should have a command called echo.

If you run this;

echo 1

you should get a 1 echoed back at you.

For example;

[EMAIL PROTECTED] echo 1
1

Or anything else you want;

[EMAIL PROTECTED] echo check
check

Now all we are doing with this is redirecting the output with the > to another location, /sys/block/md0/md/sync_action

The difference between a double >> and a single > is that the >> will append to the end and the single > will replace the contents of the file with the value.

For example;
I will create a file called foo;

[EMAIL PROTECTED] tmp]# vi foo

In this file I add two lines of text, foo, then I write and quit with :wq

Now I will take a look at the file I just made with my vi editor...

[EMAIL PROTECTED] tmp]# cat foo
foo
foo

Great, now I run my echo command to send another value to it.

First I use the double >> to just append;

[EMAIL PROTECTED] tmp]# echo foo2 >> foo

Now I take another look at the file;

[EMAIL PROTECTED] tmp]# cat foo
foo
foo
foo2

So, I have my first two text lines with the third line foo2 appended.

Now I do this again but use just the single > to replace the file contents with a value.

[EMAIL PROTECTED] tmp]# echo foo3 > foo

Then I look at it again;

[EMAIL PROTECTED] tmp]# cat foo
foo3

Ahh, all the other lines are gone and now I just have foo3.

So, > replaces and >> appends.

How does this affect your /sys/block/md0/md/sync_action file?  As it turns out, it does not matter.

Think of proc and sys (/proc and /sys) as pseudo file systems: real-time, memory-resident file systems that track the processes running on your machine and the state of your system.

So first let's go to /sys/block/

Then I will list its contents;
[EMAIL PROTECTED] ~]# cd /sys/block/
[EMAIL PROTECTED] block]# ls
dm-0  dm-3  hda  md1  ram0   ram11  ram14  ram3  ram6  ram9  sdc  sdf  sdi
dm-1  dm-4  hdc  md2  ram1   ram12  ram15  ram4  ram7  sda   sdd  sdg
dm-2  dm-5  md0  md3  ram10  ram13  ram2   ram5  ram8  sdb   sde  sdh


This will be different for you, since your system will have different hardware.
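The message is truncated here, but the next step it was heading toward is clear from the listing: each mdX directory has an md subdirectory containing sync_action.  A short sketch:

cd /sys/block/md0/md
cat sync_action            # shows idle, check, resync, etc.
echo check > sync_action   # same scrub as the full-path echo above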

RE: Software based SATA RAID-5 expandable arrays?

2007-07-16 Thread Daniel Korstad
Don't forget the > or >>; either one will do...
 
crontab -e
30 3 * * Mon echo check > /sys/block/md3/md/sync_action
30 4 * * Mon echo check > /sys/block/md0/md/sync_action
30 4 * * Mon echo check > /sys/block/md1/md/sync_action
30 4 * * Mon echo check > /sys/block/md2/md/sync_action
 
- Original Message -
From: Michael 
Sent: Mon, 7/16/2007 12:34pm
To: Daniel Korstad 
Cc: linux-raid@vger.kernel.org
Subject: Re: Software based SATA RAID-5 expandable arrays? 
 
 
Due to the nature of the data I am storing, RAID-6 is not really worth the extra safety and security, though it would be if I could get another 6 drives.  Maybe then I can convert my RAID 5 into a RAID 6.

As for openfiler, it is a great, simple package that provides all the features I need, except that they don't include the latest kernel.  That means my motherboard isn't supported (frown).  I have installed Fedora, after all the hassle of SuSe, and am currently setting that up so that it can be my main OS.  It seems great; just some of the GUI based admin tools are cryptic in their function.

I have mirrored my boot drive, which means I have to check to see if the second drive can be booted from.  This is my todo list (though it does fail to mention SMART!); the times on the crontab have to be corrected.

--
SAMBA
http://www.redhatmagazine.com/2007/06/26/how-to-build-a-dirt-easy-home-nas-server-using-samba/

Repair
http://www.issociate.de/board/post/391115/Observations_of_a_failing_disk.html
http://www.issociate.de/board/post/443666/how_to_deal_with_continuously_getting_more_errors?.html

Crontab (Weekly Repair Schedule)
http://www.unixgeeks.org/security/newbie/unix/cron-1.html
http://www.ss64.com/bash/crontab.html
crontab -e 
30 3 * * Mon echo check /sys/block/md3/md/sync_action
30 4 * * Mon echo check /sys/block/md0/md/sync_action
30 4 * * Mon echo check /sys/block/md1/md/sync_action
30 4 * * Mon echo check /sys/block/md2/md/sync_action

Check Boot Info on Mirrored Drive
After you go through the install and have a bootable OS that is running
on mdadm RAID, I would test it to make sure grub was installed
correctly on both of the physical drives.  If grub is not installed on
both drives, and you lose one drive down the road, and that one was
the one with grub, you will have a system that will not boot even
though it has a second drive with a copy of all the files.  If this
were to happen, you can recover by booting with a bootable Linux CD or
rescue disk and manually installing grub.  For example, say you only
had grub installed to hda and it failed: boot with a live Linux CD and
type (assuming /dev/hdd is the surviving second drive);
grub
device (hd0) /dev/hdd
root (hd0,0)
setup (hd0)
quit
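You can avoid the rescue step entirely by installing grub to both mirror halves right after the install; a sketch, again assuming /dev/hda and /dev/hdd are the two drives:

grub
device (hd0) /dev/hda
root (hd0,0)
setup (hd0)
device (hd0) /dev/hdd
root (hd0,0)
setup (hd0)
quit

Mapping the second drive as (hd0) makes its boot sector reference itself, so it can boot standalone if the first drive is gone.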

System Report Email Mutt
http://www.mutt.org/
http://linux.die.net/man/8/auditd.conf


- Original Message 
From: Daniel Korstad [EMAIL PROTECTED]
To: Michael [EMAIL PROTECTED]
Cc: linux-raid@vger.kernel.org
Sent: Monday, July 16, 2007 10:23:23 AM
Subject: RE:   Software based SATA RAID-5 expandable arrays?

You will learn a lot by building your own system, and it will allow you to do more 
with it as far as other services go, if you want.

However, again if you are still having problems with distro selection, 
configuration and commands, here is another NAS install solution I stumbled on.
http://www.openfiler.com

They appear to use a Fedora distro, remade into their own.  They also use the mdadm packages.

I have not played with this, but if I had to choose, I would use this one, since I have had more experience with mdadm as opposed to what FreeNAS is using.

Their version of mdadm is not the very latest, however.  That won't affect you unless you want to be able to grow your RAID; then you will need to update it.

https://www.openfiler.com/community/forums/viewtopic.php?id=741

Oh, and they do support creating RAID6 arrays
http://www.openfiler.com/screenshots/shots/RAID_Mgmt3.png


Just giving you more options.
Dan.


- Original Message -
From: Daniel Korstad 
Sent: Mon, 7/16/2007 7:48am
To: Michael 
Subject: RE: Software based SATA RAID-5 expandable arrays? 


Something I ran across a year ago.
http://www.freenas.org/index.php?option=com_versions&Itemid=51

I played with it for a day or so and it looked impressive.  The project is still very much alive, and they just released a new version a couple of days ago.

The caveat, or the reason I did not use this, is that I use my Linux box for so many other things (web server, Asterisk (VoIP), Chillispot, VMware Server, firewall, ...).

If you go this route, you will pretty much dedicate your box for just a NAS 
function.  The project is an ISO OS you download and install.  This greatly 
simplifies things but it ties you down a bit.

After it is built, clients connect to it via several options you can configure: CIFS (this is Windows file sharing, or Samba), FTP, NFS, RSYNCD, SSHD, Unison, AFP.

It also supports hard disk standby time

RE: Software based SATA RAID-5 expandable arrays?

2007-07-13 Thread Daniel Korstad
To run it manually;

echo check > /sys/block/md0/md/sync_action

then you can check the status with;

cat /proc/mdstat

Or to continually watch it, if you want (kind of boring though :) )

watch cat /proc/mdstat

This will refresh every 2 sec.

In my original email I suggested to use a crontab so you don't need to remember 
to do this every once in a while.

Run (I did this in root);

crontab -e 

This will allow you to edit your crontab.  Now paste this command in there;

30 2 * * Mon echo check > /sys/block/md0/md/sync_action

If you want you can add comments.  I like to comment my stuff since I have lots of stuff in mine; just make sure you have '#' at the front of those lines so your system knows each is just a comment and not a command it should run;

#check for bad blocks once a week (every Mon at 2:30am)
#if bad blocks are found, they are corrected from parity information

After you have put this in your crontab, write and quit with this command;

:wq

It should come back with this;
[EMAIL PROTECTED] ~]# crontab -e
crontab: installing new crontab

Now you can look at your cron table (without editing) with this;

crontab -l

It should return something like this, depending if you added comments or how 
you scheduled your command;

#check for bad blocks once a week (every Mon at 2:30am)
#if bad blocks are found, they are corrected from parity information
30 2 * * Mon echo check > /sys/block/md0/md/sync_action

For more info on crontab and syntax for times (I just did a Google search and grabbed the first couple of links...);
http://www.tech-geeks.org/contrib/mdrone/croncrontab-howto.htm
http://ubuntuforums.org/showthread.php?t=102626&highlight=cron

Cheers,
Dan.

-Original Message-
From: Michael [mailto:[EMAIL PROTECTED] 
Sent: Thursday, July 12, 2007 5:43 PM
To: Bill Davidsen; Daniel Korstad
Cc: linux-raid@vger.kernel.org
Subject: Re: Software based SATA RAID-5 expandable arrays?

SuSe uses its own version of cron, which is different from everything else I have seen, and the documentation is horrible.  However, they provide a wonderful X Windows utility that helps set them up... the problem I'm having is figuring out what to run.  When I try to run /sys/block/md0/md/sync_action at a prompt, it shoots out a permission denied even though I am su or logged in as root.  Very annoying.  You mention check vs. repair... which brings me to my last issue in setting up this machine.  How do you send an email when check fails, when SMART trips, and when a RAID drive fails?  How do you auto repair if the check fails?
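For the email part, mdadm itself has a monitor mode that mails on failure events, and smartd covers the SMART side; a sketch, with the address as a placeholder and exact config paths varying by distro:

# /etc/mdadm.conf
MAILADDR [EMAIL PROTECTED]

# run as a daemon; mails on Fail, DegradedArray and similar events
mdadm --monitor --scan --daemonise --delay 1800

# /etc/smartd.conf: -H checks overall health, -m mails on trouble
DEVICESCAN -H -m [EMAIL PROTECTED]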

These are the last things I need to do for my Linux server to work right... after I get all of this done, I will change the boot to go to the command prompt and not X Windows, and I will leave it in the corner of my room, hopefully not to be touched for as long as possible.

- Original Message 
From: Bill Davidsen [EMAIL PROTECTED]
To: Daniel Korstad [EMAIL PROTECTED]
Cc: Michael [EMAIL PROTECTED]; linux-raid@vger.kernel.org
Sent: Wednesday, July 11, 2007 10:21:42 AM
Subject: Re: Software based SATA RAID-5 expandable arrays?

Daniel Korstad wrote:
 You have lots of options.  This will be a lengthy response and will give just some ideas for just some of the options...

Just a few thoughts below interspersed with your comments.
 For my server, I had started out with a single drive.  I later migrated to a RAID 1 mirror (after having to deal with reinstalls after drive failures, I wised up).  Since I already had an OS that I wanted to keep, my RAID-1 setup was a bit more involved.  I followed this migration guide to get me there;
 http://wiki.clug.org.za/wiki/RAID-1_in_a_hurry_with_grub_and_mdadm
  
 Since you are starting from scratch, it should be easier for you.  Most distros will have an installer that will guide you through the process.  When you get to hard drive partitioning, look for an advanced option or a review and modify partition layout option or something similar; otherwise it might just make a guess at what you want, and that would not be RAID.  In this advanced partition setup, you will be able to create your RAID.  First you make equal size partitions on both physical drives.  For example, first carve out a 100M partition on each of the two physical OS drives, then make a RAID 1 md0 from this pair of partitions and make it your /boot.  Do this again for the other partitions you want to have RAIDed.  You can do this for /boot, /var, /home, /tmp, /usr.  It can be nice to have separation, in case a user fills /home/foo with crap; this will not affect other parts of the OS, or if the mail spool fills up, it will not hang the OS.  The only problem is determining how big to make them during the install.  At a minimum, I would do three partitions: /boot, swap, and /.  This means all the others (/var, /home, /tmp, /usr) are in the / partition, but this way you don't have to worry about sizing them all correctly.
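If you would rather build the arrays by hand than through the installer, the equivalent mdadm commands look roughly like this (partition names hypothetical; set the partition type to fd, Linux raid autodetect, first):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # /boot
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2   # /
mkfs.ext3 /dev/md0
mkfs.ext3 /dev/md1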
  
 For the simplest setup, I would do RAID 1

RE: Software based SATA RAID-5 expandable arrays?

2007-07-13 Thread Daniel Korstad
 into a router;
echo 1 > /proc/sys/net/ipv4/ip_forward



As for SuSe updating your kernel, removing your original one and breaking your 
box by dropping you to a limited shell on boot up..  I can't help you much 
there.  I don't have SuSe but as I understand, they are a good distro.  In my 
current distro, Fedora, you can tell the update manager to not update the 
kernel.  Also in Fedora, it will keep your old kernel by default so if there 
was an issue, you can select to go back to it in the grub boot up menu.  I 
believe Ubuntu is similar.  I bet you could configure SuSe to do the same.

I hope that clears up some confusion and good luck.

Dan.

-Original Message-
From: Michael [mailto:[EMAIL PROTECTED] 
Sent: Friday, July 13, 2007 11:48 AM
To: Daniel Korstad
Cc: davidsen; linux-raid
Subject: Re: Software based SATA RAID-5 expandable arrays?

RESPONSE

I had everything working, but it is evident that when I installed SuSe
the first time, check and repair were not included in the package :(  I
did not use the >> I used >, as was incorrectly stated in
many of the documentations I followed.


The thing that made me suspect check and repair weren't part of SuSe was
the failure of check or repair typed at the command prompt to
respond in any way other than a response that stated there was no such
command.  In addition, man check and man repair were also missing.


BROKEN!

I did an auto update of the SuSe machine, which ended up replacing the
kernel.  They added the new entries to the boot choices, but the mount
information was not transferred.  SuSe also deleted the original kernel
boot setup.  When SuSe looked at the drives individually, it found
that none of them was recognizable.  Therefore, when I woke up this
morning and rebooted the machine after the update, I received the
errors and was then dumped to a basic prompt with limited ability to do
anything.  I know I need to manually remount the drives, but it's going
to be a challenge since I did not do this in the past.  The answer to
this question is that I either have to change distros (which I am
tempted to do) or fix the current distro.  Please do not bother
providing any solutions, for I simply have to RTFM (which I haven't had
time to do).



I think I am going to re-set up my machines.  The first two drives with
identical boot partitions, yet not mirrored.  I can then manually
run a tree copy that would update my second drive as I grow the
system, and after successful and needed updates.  This would then
allow me a fallback after any update, by simply swapping the SATA
drive cables from the first boot drive to the second.  I am assuming
this will work.  I then can RAID-6 (or 5) the setup and recopy my files
(yes, I haven't deleted them, because I am not confident in my ability
with Linux yet).  Hopefully I will just simply remount these 4 drives,
because they're a simple RAID 5 array.



SUSE's COMPLETE FAILURES

This frustration with SuSe, the lack of a simple reliable update
utility, and the failures I experienced have discouraged me from using
SuSe at all.  It's got some amazing tools that keep me from constantly
looking up documentation, posting to forums, or going to IRC, but the
unreliable upgrade process is a deal breaker for me.  It's simply too
much work to manually update everything.  This project had a simple
goal, which was to provide an easy and cheap solution for an unlimited
NAS service.



SUPPORT

In addition, SuSe's IRC help channel is among the worst I have
encountered.  The level of support is often very good, but the level of
harassment, flames and simple childish behavior overwhelms almost any
attempt at providing any level of support.  I have no problem giving
back to the community when I learn enough to do so, but I will not be
mocked for my inability to understand a new and very in-depth system.
In fact, I tend to go to the wonderful Gentoo IRC for my answers.  The
IRC is amazing, the people patient and encouraging, and the level of
knowledge is the best I have experienced.  This resource, outside the
original incident, has been amazing.  I feel highly
confident asking questions about RAID here, because I know you guys are
actually RUNNING systems like the one I am attempting to build.

- Original Message 
From: Daniel Korstad [EMAIL PROTECTED]
To: big.green.jelly.bean [EMAIL PROTECTED]
Cc: davidsen [EMAIL PROTECTED]; linux-raid linux-raid@vger.kernel.org
Sent: Friday, July 13, 2007 11:22:45 AM
Subject: RE: Software based SATA RAID-5 expandable arrays?

To run it manually;

echo check > /sys/block/md0/md/sync_action

then you can check the status with;

cat /proc/mdstat

Or to continually watch it, if you want (kind of boring though :) )

watch cat /proc/mdstat

This will refresh every 2 sec.

In my original email I suggested to use a crontab so you don't need to remember 
to do this every once in a while.

Run (I did this in root);

crontab -e 

This will allow you to edit your crontab.  Now paste this command in there;

30 2 * * Mon echo check > /sys

RE: Software based SATA RAID-5 expandable arrays?

2007-07-11 Thread Daniel Korstad
  
That was true up to kernel 2.6.21 and mdadm 2.6, where support for RAID 6 reshape arrived.
 
I have reshaped (added additional drives) to my RAID 6 twice now with no 
problems in the past few months.
 
You mentioned that as the only disadvantage.  There are other things to consider.  The overhead for parity, of course.  You can't have a RAID 6 with only three drives, unless you build it with a missing drive and run degraded.  Also (my opinion) it might not be worth the overhead with only 4 drives, unless you plan to reshape (add drives) down the road.  When you have an array with several drives, it is more advantageous, as the percentage of disk space lost to parity goes down [(2/N)*100, where N is the number of drives in the array], so your storage efficiency increases [(N-2)/N].  For example, a 4-drive RAID 6 loses 50% of its raw capacity to parity, while a 10-drive RAID 6 loses only 20%.  And with more drives, the odds of getting hit with a bit error after you lose a drive, while you are trying to rebuild, go up.
 
Also, there is a very slight performance drop for write speeds on RAID6, since you are calculating both p and q parity.

But for what I use my system for (family digital photos, file storage and media serving) I mostly read data and am not bothered by the slight performance hit on writes.

I have been using RAID6 with 10 disks for over a year, and it has saved me at least once.

As far as converting the RAID6 to RAID5 or RAID4...  I have never had a need to do this, but no, probably not.
 
Dan.

 

- Inline Message Follows -
To: Daniel Korstad ; Michael 
Cc: linux-raid@vger.kernel.org
From: jahammonds prost
Subject: Re: Software based SATA RAID-5 expandable arrays?



 Why do I use RAID6?  For the extra redundancy 

I've been thinking about RAID6 too, having been bitten a couple of times 
the only disadvantage that I can see at the moment is that you can't convert 
and grow it... ie... I can't convert from a 4 drive RAID5 array to a 5 drive 
RAID6 one when I add an additional drive... I also don't think that you can 
grow a RAID6 array at the moment - I'd want to add additional drives over a few 
months as they come on sale Or am I wrong on both counts?


Graham
-
To unsubscribe from this list: send the line unsubscribe linux-raid in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


RE: Software based SATA RAID-5 expandable arrays?

2007-07-11 Thread Daniel Korstad
Currently, no, you can't.
 
However, it is on the TODO list.
 http://neil.brown.name/blog/20050727143147-003
 
Maybe by the end of the year; Neil hit his goal on the RAID6 grow for kernel 2.6.21... but Neil states the RAID 5 to RAID 6 conversion is more complex to implement...
 
Dan.
 
- Original Message -
From: jahammonds prost 
Sent: Wed, 7/11/2007 12:26pm
To: Daniel Korstad 
Cc: linux-raid@vger.kernel.org
Subject: Re: Software based SATA RAID-5 expandable arrays? 
 
 
Ahh... guess it's time to upgrade again...  My plan was to start off with 3 drives in a RAID5, and slowly grow it up to maybe 6 or 7 drives before converting it over to a RAID6, and then topping it out at 12 drives (all I can fit in the case)...  The performance hit isn't going to bother me too much - it's mainly going to be for video for my media server for the house...

So.. Can I expand a RAID6 now, which is good But can I change from RAID5 to 
RAID6 whilst online?


Graham

- Original Message 
From: Daniel Korstad [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: linux-raid@vger.kernel.org
Sent: Wednesday, 11 July, 2007 11:03:34 AM
Subject: RE: Software based SATA RAID-5 expandable arrays?


That was true up to kernel 2.6.21 and mdadm 2.6, where support for RAID 6 reshape arrived.

I have reshaped (added additional drives) to my RAID 6 twice now with no 
problems in the past few months.

You mentioned that as the only disadvantage.  There are other things to consider.  The overhead for parity, of course.  You can't have a RAID 6 with only three drives, unless you build it with a missing drive and run degraded.  Also (my opinion) it might not be worth the overhead with only 4 drives, unless you plan to reshape (add drives) down the road.  When you have an array with several drives, it is more advantageous, as the percentage of disk space lost to parity goes down [(2/N)*100, where N is the number of drives in the array], so your storage efficiency increases [(N-2)/N].  For example, a 4-drive RAID 6 loses 50% of its raw capacity to parity, while a 10-drive RAID 6 loses only 20%.  And with more drives, the odds of getting hit with a bit error after you lose a drive, while you are trying to rebuild, go up.

Also, there is a very slight performance drop for write speeds on RAID6, since you are calculating both p and q parity.

But for what I use my system for (family digital photos, file storage and media serving) I mostly read data and am not bothered by the slight performance hit on writes.

I have been using RAID6 with 10 disks for over a year, and it has saved me at least once.

As far as converting the RAID6 to RAID5 or RAID4...  I have never had a need to do this, but no, probably not.

Dan.



- Inline Message Follows -
To: Daniel Korstad ; Michael 
Cc: linux-raid@vger.kernel.org
From: jahammonds prost
Subject: Re: Software based SATA RAID-5 expandable arrays?



 Why do I use RAID6?  For the extra redundancy 

I've been thinking about RAID6 too, having been bitten a couple of times 
the only disadvantage that I can see at the moment is that you can't convert 
and grow it... ie... I can't convert from a 4 drive RAID5 array to a 5 drive 
RAID6 one when I add an additional drive... I also don't think that you can 
grow a RAID6 array at the moment - I'd want to add additional drives over a few 
months as they come on sale Or am I wrong on both counts?


Graham
-
To unsubscribe from this list: send the line unsubscribe linux-raid in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


-
To unsubscribe from this list: send the line unsubscribe linux-raid in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


RE: Software based SATA RAID-5 expandable arrays?

2007-07-11 Thread Daniel Korstad
And if I were a betting man, I would guess you will need to add a physical drive to execute a RAID5 to RAID6 conversion, to hold the additional parity, even if your current RAID5 is not full of data.
 
So if your case only holds 12 drives, I would not grow your RAID5 to 12 drives and expect to be able to convert to RAID6 with the same 12 drives, even if they are not full of data.
 
But that is just my guess on a feature that does not even exist yet...
 
Dan.
 
- Original Message -
From: [EMAIL PROTECTED] on behalf of Daniel Korstad 
Sent: Wed, 7/11/2007 2:14pm
To: jahammonds prost 
Cc: linux-raid@vger.kernel.org
Subject: RE: Software based SATA RAID-5 expandable arrays? 
 
 
Currently, no, you can't.

However, it is on the TODO list.
http://neil.brown.name/blog/20050727143147-003

Maybe by the end of the year; Neil hit his goal on the RAID6 grow for kernel 2.6.21... but Neil states the RAID 5 to RAID 6 conversion is more complex to implement...

Dan.

- Original Message -
From: jahammonds prost 
Sent: Wed, 7/11/2007 12:26pm
To: Daniel Korstad 
Cc: linux-raid@vger.kernel.org
Subject: Re: Software based SATA RAID-5 expandable arrays? 


Ahh... guess it's time to upgrade again...  My plan was to start off with 3 drives in a RAID5, and slowly grow it up to maybe 6 or 7 drives before converting it over to a RAID6, and then topping it out at 12 drives (all I can fit in the case)...  The performance hit isn't going to bother me too much - it's mainly going to be for video for my media server for the house...

So.. Can I expand a RAID6 now, which is good But can I change from RAID5 to 
RAID6 whilst online?


Graham

- Original Message 
From: Daniel Korstad [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: linux-raid@vger.kernel.org
Sent: Wednesday, 11 July, 2007 11:03:34 AM
Subject: RE: Software based SATA RAID-5 expandable arrays?


That was true up to kernel 2.6.21 and mdadm 2.6, where support for RAID 6 reshape arrived.

I have reshaped (added additional drives) to my RAID 6 twice now with no 
problems in the past few months.

You mentioned that as the only disadvantage.  There are other things to consider.  The overhead for parity, of course.  You can't have a RAID 6 with only three drives, unless you build it with a missing drive and run degraded.  Also (my opinion) it might not be worth the overhead with only 4 drives, unless you plan to reshape (add drives) down the road.  When you have an array with several drives, it is more advantageous, as the percentage of disk space lost to parity goes down [(2/N)*100, where N is the number of drives in the array], so your storage efficiency increases [(N-2)/N].  For example, a 4-drive RAID 6 loses 50% of its raw capacity to parity, while a 10-drive RAID 6 loses only 20%.  And with more drives, the odds of getting hit with a bit error after you lose a drive, while you are trying to rebuild, go up.

Also, there is a very slight performance drop for write speeds on RAID6, since you are calculating both p and q parity.

But for what I use my system for (family digital photos, file storage and media serving) I mostly read data and am not bothered by the slight performance hit on writes.

I have been using RAID6 with 10 disks for over a year, and it has saved me at least once.

As far as converting the RAID6 to RAID5 or RAID4...  I have never had a need to do this, but no, probably not.

Dan.



- Inline Message Follows -
To: Daniel Korstad ; Michael 
Cc: linux-raid@vger.kernel.org
From: jahammonds prost
Subject: Re: Software based SATA RAID-5 expandable arrays?



 Why do I use RAID6?  For the extra redundancy 

I've been thinking about RAID6 too, having been bitten a couple of times 
the only disadvantage that I can see at the moment is that you can't convert 
and grow it... ie... I can't convert from a 4 drive RAID5 array to a 5 drive 
RAID6 one when I add an additional drive... I also don't think that you can 
grow a RAID6 array at the moment - I'd want to add additional drives over a few 
months as they come on sale Or am I wrong on both counts?


Graham
-
To unsubscribe from this list: send the line unsubscribe linux-raid in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


-
To unsubscribe from this list: send the line unsubscribe linux-raid in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


RE: Software based SATA RAID-5 expandable arrays?

2007-06-18 Thread Daniel Korstad
 
Last I checked, expanding drives (reshaping the RAID) in a RAID set within Windows is not supported.
 
Significant size is relative, I guess, but 4-8 terabytes will not be a problem in either OS.
 
I run a RAID 6 (Windows does not support this either, last I checked).  I started out with 5 drives and have reshaped it to ten drives now.  I have a few 250G (old original drives) and many 500G drives (added and replacement drives) in the set.  Once all the old 250G drives die off and I replace them with 500G drives, I will grow the RAID to the size of its new smallest disk, 500G.  Grow and reshape are slightly different; both are supported in Linux mdadm.  I have tested both with success.
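In mdadm terms, the two look roughly like this (device names hypothetical):

# reshape: add a disk and restripe the array across it
mdadm --add /dev/md0 /dev/sdk1
mdadm --grow /dev/md0 --raid-devices=11

# grow: once every member is 500G, expand to the new smallest disk
mdadm --grow /dev/md0 --size=max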
 
I too use my set for media and it is not in use 90% of the time.
 
I put this line in my /etc/rc.local to put the drives to sleep after a specified number of minutes of inactivity;
hdparm -S 241 /dev/sd*
The values for the -S switch are not intuitive; read the man page.  The value I use (241) puts them into standby (spindown) after 30 min.  My OS is on EIDE and my RAID set is all SATA, hence the splat for all SATA drives.
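From the hdparm man page: -S values 1 to 240 mean that many 5-second units (up to 20 minutes), and 241 to 251 mean 1 to 11 units of 30 minutes.  For example:

hdparm -S 120 /dev/sda   # 120 x 5 sec = 10 min
hdparm -S 241 /dev/sd*   # 1 x 30 min = 30 min, as above
hdparm -S 242 /dev/sd*   # 2 x 30 min = 1 hour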
 
I have been running this for a year now with my RAID set.  It works great, and I have had no problems with mdadm waiting on drives to spin up when I access them.
 
The one caveat: be prepared to wait a few moments if they are all in spindown state before you can access your data.  For me, with ten drives, it is always less than a minute, usually 30 sec or so.
 
For a filesystem, I use XFS for my large media files.
 
Dan.




- Inline Message Follows -
To: linux-raid@vger.kernel.org
From: greenjelly
Subject: Software based SATA RAID-5 expandable arrays?


I am researching my option to build a Media NAS server.  Sorry for the long
message, but I wanted to provide as much details as possible to my problem,
for the best solution.  I have Bolded sections as to save people who don't
have the time to read all of this.

Option 1: Expand My current Dream Machine!
I could buy a RAID-5 hardware card for my current system (Vista Ultimate
64 with an Extreme 6800 and 2 gigs of 1066MHz RAM).  The Adaptec RAID controller
(model 3805; you can search NewEgg for the information) will cost me near
$500 (consumes 23W) and supports 8 drives (I have 6).  This controller
contains an 800MHz processor with a large cache of memory.  It will support
an expandable RAID-5 array!  I would also buy a 750W+ PSU (for the additional
safety and security).  The drives in this machine would be placed in shock
absorbing (noise reduction) 3-slot 4-drive bay containers with fans (I have
2 of these), and I will be removing an IDE based Pioneer DVD burner (1 of 3)
because of its flaky performance, given the P965 Intel chipset's lack of
native IDE support and thus the motherboard's Micron SATA to IDE device.  I've
already installed 4 drives in this machine (on the native MB SATA
controller) only to find a fan fail on me within days of the installation.
One of the drives went bad (may or may not have to do with the heat).  There
are 5mm between these drives, and I would now replace both fans with higher
RPM ball bearing fans for added reliability (more noise).  I would also need
to find freeware SMART monitoring software, which at this time I cannot find
for Vista, to warn me of increased temps due to failure of a fan, increased
environmental heat, etc.  The only option is commercial SMART monitoring
software (which may not work with the Adaptec RAID adapter).

Option 2: Build a server.
I have a copy of Windows 2003 server, which I have yet to find out if it
supports native software expandable RAID-5 arrays.  I can also use Linux
(which I have very little experience with) but have always wanted to use and
learn. 

To do either of the last two options, I would still need to buy a new power
supply for my current Vista machine (for added reliability).  The current
PSU is 550W, and with a power hungry Radeon, 3 DVD drives and an X-Fi sound
card... my nerves are getting frayed.

I would buy a cheap motherboard, processor and 1 gig or less of RAM.  Lastly,
I would want a VERY large case.  I have a 7300 NVidia PCI card that was
replaced with an X1950GT on my Home Theater PC so that I may play back
HD/Blu-ray DVDs.

The server option may cost a bit more than the $500 for the Adaptec RAID
controller.  This will only work if Linux or Windows 2003 supports my much
needed requirements.  My Linux OS will be installed on a 40GB IDE drive (not
part of the array).

The options I seek are to be able to start with a 6 drive RAID-5 array,
then, as my demand for more space increases in the future, to be able
to plug in more drives and incorporate them into the array without the
need to back up the data.  Basically, I need the software to add the
drive/drives to the array, then rebuild the array incorporating the new
drives while preserving the data on the original array.

QUESTIONS
Since this is a media server, and would only be used to serve Movies and
Video to my two machines It 

RE: RAID 6 grow problem

2007-06-06 Thread Daniel Korstad
Sometimes people confuse bus speed with actual drive speed.  Manufacturers do it as a marketing ploy.  There is a physical limitation on the internal drive's sustained read/write speed.  Higher RPMs help.  Perpendicular recording technologies will too, as more information passes the head in each revolution.
 
Then you have the interface to the internal drive: IDE, EIDE, SATA, SATA II, SCSI, ...
 
A drive with a sustained read speed of 70MB/s on a SATA II interface will perform the same on SATA I and IDE.  You will get a performance gain with SATA II on burst/buffer cached data access (for a short delta of time) but not sustained speed.  There is no bus bottleneck, and the faster bus does not increase your sustained speed.  I had a PCI bus bottleneck because I have too many drives on that bus and am too cheap to upgrade the system to PCI Express :)  Plus, using it across the WiFi or LAN, I would not see much gain.  Only when doing tasks on the local box do I hit the bottleneck.
 
Now, RAID 5 and 6 sets with more drives will perform faster than ones with fewer drives (RAID 5 beats RAID 6 on writes; less parity to deal with).  But with all bus bottlenecks removed, I have not experienced a linear gain, with the speed of one drive times the number of drives in the set equaling the total speed of the array.  The number is much less; there is some overhead.  And it has been my experience that as you add drives, the gain is not linear but a curve, with diminishing gains as you get to large numbers of drives in the set.
 
You say you have a RAID with three drives (I assume RAID5) with a read performance of 133MB/s.  There are lots of variables (file system type, cache tuning), but that sounds very reasonable to me.
 
Here is a site with some tests for RAID5 with 8 drives in the set, using high end hardware RAID.
http://www.rhic.bnl.gov/hepix/talks/041019pm/schoen.pdf
8 drive RAID 5, 7200 rpm SATA drives = ~180MB/s
8 drive RAID 5, 10k rpm SATA drives = ~310MB/s
 
With today's processor speeds and multiple cores, I don't think there is much difference between mdadm software RAID and hardware RAID.  In fact, some would say software RAID is superior, depending on how the hardware XOR engine in the card performs.  But that too is another topic/thread (I have to stop doing that...)
 
Dan.




- Inline Message Follows -
To: Jon Nelson 
Cc: linux-raid@vger.kernel.org
From: Neil Brown
Subject: RE: RAID 6 grow problem


On Tuesday June 5, [EMAIL PROTECTED] wrote:
 
 I have an EPoX 570SLI motherboard with 3 SATAII drives, all 320GiB: one 
 Hitachi, one Samsung, one Seagate. I built a RAID5 out of a partition 
 carved from each. I can issue a 'check' command and the rebuild speed 
 hovers around 70MB/s, sometimes up to 73MB/s, and dstat/iostat/whatever 
 confirms that each drive is sustaining approximately 70MB/s reads. 
 Therefore, 3x70MB/s = 210MB/s which is a bunch more than 133MB/s. lspci 
 -v reveals, for one of the interfaces (the others are pretty much the 
 same):

..
 
 I'm trying to determine what the limiting factor of my raid is: Is it 
 the drives, 

If you look at the data sheets for the drives (I just had a look at a
Seagate one, fairly easy to find on their web site) you should find
the Maximum Sustained Transfer Rate, which will be about 70MB/s for
current 7200rpm drives.

So I think the drive is the limiting factor.

NeilBrown
-
To unsubscribe from this list: send the line unsubscribe linux-raid in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


RE: XFS on x86_64 Linux Question

2007-04-28 Thread Daniel Korstad


Short answer, yep, I have done it. 

 

I don't know exactly what you are looking for, or what tools you need. 
The follow is my very recent experience. 

 

I have an FC4 x86_64 that I have been using for a while.  I have a
raided OS (RAID 1, using the two separate IDE controllers) and raided data
drives (9 in a RAID6 on SATA controllers).

 

I had replaced the original OS drives with larger ones.  I wanted to take
the new larger drives' capacity and put it into the ext3 fs on the RAID 1.

 

Last night I did this on the system; 

mdadm --grow /dev/md0 --size=max 

 

But I had an older resize2fs; it does not support ext3 resize on a live mounted
filesystem, and since this partition was my / partition...

 

I rebooted my system with knoppix and did; 

 

mdadm --assemble /dev/md0 /dev/hda1 /dev/hdc1 

fsck.ext3 /dev/md0 

resize2fs /dev/md0 

 

And reboot back to operational without any problems. 

 

My data RAID6 is on XFS.  It supports live filesystem expansion
(xfs_growfs while mounted).   I have not done anything with my XFS under
Knoppix, but I am sure some tools are in there too. 
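For reference, the XFS side never needs the Knoppix detour, since it resizes while mounted; a sketch with a hypothetical mount point:

mdadm --grow /dev/md1 --size=max   # let the array take the larger members' capacity
xfs_growfs /data                   # grow the mounted filesystem to fill it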

 

This is not a thorough test of the tools on Knoppix on my 64-bit system, but from my
experience, I can boot into it and assemble my RAID within it. 

 

Dan.


 

- Inline Message Follows - 

To: [EMAIL PROTECTED] 

Cc: linux-raid@vger.kernel.org 

From: Justin Piszcz 

Subject: Re: XFS on x86_64 Linux Question 

 


With correct CC'd address. 
 
 
On Sat, 28 Apr 2007, Justin Piszcz wrote: 
 
 Hello-- 
 
 Had a quick question, if I re-provision a host with an Intel Core Duo CPU  
 with x86_64 Linux; I create a software raid array and use the XFS  
 filesystem-- all in 64bit space... 
 
 If I boot a recovery image such as Knoppix, it will not be able to work on  
 the filesystem correct?  I would need a 64bit live CD? 
 
 Does the same apply to software raid?  Can I mount a software raid created in 
  
 a 64bit environment in a 32bit environment? 
 
 Justin. 
 
- 
To unsubscribe from this list: send the line unsubscribe linux-raid in 
the body of a message to [EMAIL PROTECTED] 
More majordomo info at  http://vger.kernel.org/majordomo-info.html 
 

-
To unsubscribe from this list: send the line unsubscribe linux-raid in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


FW: [slightly OT] expanding lvm volume group after growing raid5 array

2007-04-19 Thread Daniel Korstad
I have used lv with my past raid sets and it is very nice, adds some 
flexibility.  

I have also done a RAID5 reshape that was in an LV.  Unfortunately, at the time 
I had an older LVM version that did not support pvresize.  So I was stuck with 
a larger RAID set, and my LV could not take advantage of it.

I think I needed LVM version 2.02.06 to solve that and get the pvresize 
feature.  If you are running a relatively new distro, that won't be an issue any 
more.  I think I had FC3 or 4.
 
To get the extra space over to your filesystem, there are a couple of steps in 
between that you need to do with the LVs.

You need to pvresize the physical volume; the volume group will then have a 
larger capacity.  Then resize/reallocate the new capacity to the logical volume; 
then you can resize your file system inside the logical volume.
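In command form, the sequence is roughly this (VG/LV names are placeholders, ext3 assumed; +100%FREE needs a reasonably recent LVM2, otherwise use -L with an explicit size):

pvresize /dev/md0                        # the PV picks up the grown array
vgdisplay vg0                            # the VG should now show free extents
lvextend -l +100%FREE /dev/vg0/lv_data   # hand the new extents to the LV
resize2fs /dev/vg0/lv_data               # grow the filesystem inside the LV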


- Inline Message Follows -
To: linux-raid@vger.kernel.org
From: Gavin McCullagh
Subject: [slightly OT] expanding lvm volume group after growing raid5 array


Hi Folks,

I recently upgraded all four disks of a RAID5 array and then used mdadm
--grow to grow the raid array into the new bigger partitions available to
it and ran resize2fs.  Lovely.  

However, I've just tried this again on a similar machine and have grown the
array.  However, I've just noticed that the RAID5 array has an lvm volume
group on it with two logical volumes on that.  So now, I'd like to grow the
volume group so I can grow one of the logical volumes and grow the
filesystem therein.

I realise this is the raid group but has anyone come across how to do this?

Gavin

-
To unsubscribe from this list: send the line unsubscribe linux-raid in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
-
To unsubscribe from this list: send the line unsubscribe linux-raid in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


RE: Grow a RAID-6 ?

2007-03-23 Thread Daniel Korstad
As I understand it, reshape for RAID6 is coming now.  It is in the 2.6.21 
kernel, still at rc4 as of today, though.  I am looking forward to it, and plan 
to give it a test run when it is released.

I have used lv with my past raid sets and it is very nice, adds some 
flexibility.  

I have also done a RAID5 reshape that was in an LV.  Unfortunately, at the time 
I had an older LVM version that did not support pvresize.  So I was stuck with 
a larger RAID set, and my LV could not take advantage of it.

I think I needed LVM version 2.02.06 to solve that and get the pvresize 
feature.  If you are running a relatively new distro, that won't be an issue any 
more.  I think I had FC3 or 4.

-Original Message-
From: Gordon Henderson [mailto:[EMAIL PROTECTED] 
Sent: Friday, March 23, 2007 11:35 AM
To: Mattias Wadenstein
Cc: linux-raid@vger.kernel.org
Subject: Re: Grow a RAID-6 ?

On Fri, 23 Mar 2007, Mattias Wadenstein wrote:

 On Fri, 23 Mar 2007, Gordon Henderson wrote:

 Are there any plans in the near future to enable growing RAID-6 arrays by 
 adding more disks into them?
 
 I have a 15x500GB - drive unit and I need to add another 15 drives into 
 it... Hindsight is telling me that maybe I should have put LVM on top of 
 the RAID-6, however, the usable 6TB it yields should have been enough for 
 anyone...

 Well, if you are doubling the space, you could take this opportunity to put 
 lvm on the new disks, move all the data, then put in the old disks as a pv, 
 extending the lvm space.

Now why didn't I think of that. *thud*

 I really wouldn't recommend having a 30-disk raid6, imagine the rebuild time 
 after a failed disk..

There is that - it would give me 2 disks (ie. 1TB) more space though...

This isn't a performance limited server though, it's an off-site backup 
box, so it just has to be reasonably reliable.

Thanks!

Gordon
-
To unsubscribe from this list: send the line unsubscribe linux-raid in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html




Re: future hardware

2006-10-27 Thread Daniel Korstad
I have a case that will fit seven HDs in standard bays.  Then I have four
5.25 bays for DVD/CD drives, so I bought this;
http://www.newegg.com/product/product.asp?item=N82E16841101035

leaving me one 5.25 left for the fan.  In addition to the fan in the
item above, I have the exhaust fan on the Power Supply, another 12mm
exhaust fan and a 12mm intake that blows across the other HDs.

This is my current case, with a little mod for an extra drive;
http://www.newegg.com/Product/Product.asp?Item=N82E16811133133

I have ten drives in it now.  Two in a RAID1 for the OS and eight in a
RAID6.

If I were to do it again, I would buy this...
http://www.newegg.com/Product/Product.asp?Item=N82E1682064



On Fri, 2006-10-27 at 17:22 -0400, Bill Davidsen wrote:
 Dan wrote:
 
 I have been using an older 64bit system, socket 754 for a while now.  It has
 the old PCI bus 33Mhz.  I have two low cost (no HW RAID) PCI SATA I cards
 each with 4 ports to give me an eight disk RAID 6.  I also have a Gig NIC,
 on the PCI bus.  I have Gig switches with clients connecting to it at Gig
 speed.
 
 As many know you get a peak transfer rate of 133 MB/s or 1064Mb/s from that
 PCI bus http://en.wikipedia.org/wiki/Peripheral_Component_Interconnect
 
  The transfer rate is not bad across the network, but my bottleneck is the
  PCI bus.  I have been shopping around for new MB and PCI-express cards.  I
 have been using mdadm for a long time and would like to stay with it.  I am
 having trouble finding an eight port PCI-express card that does not have all
 the fancy HW RAID which jacks up the cost.  I am now considering using a MB
 with eight SATA II slots onboard.  GIGABYTE GA-M59SLI-S5 Socket AM2 NVIDIA
 nForce 590 SLI MCP ATX.
 
 What are other users of mdadm using with the PCI-express cards, most cost
 effective solution?
 
  There may still be m/b available with multiple PCI busses. Don't know if 
  you are interested in a low budget solution, but that would address 
  bandwidth and use existing hardware.
  
  Idle curiosity: what kind of case are you using for the drives? I will 
  need to spec a machine with eight drives in the December-January timeframe.
 




Re: future hardware

2006-10-27 Thread Daniel Korstad


 I have a case what will fit seven HD in standard bays.  Than I have four
 bays of 5.25 for DVD/CD drives, so I bought this;
 http://www.newegg.com/product/product.asp?item=N82E16841101035
 
 leaving me one 5.25 left for the fan.  In addition to the fan in the
 item above, I have the exhaust fan on the Power Supply, another 12mm
 exhaust fan and a 12mm intake that blows across the other HDs.
Sorry, I was in too much of a hurry; those are 120mm exhaust and 120mm intake

 
 This is my current case, with a little mod for an extra drive;
 http://www.newegg.com/Product/Product.asp?Item=N82E16811133133
 
 I have ten drives in it now.  Two in a RAID1 for the OS and eight in a
 RAID6.
 
 If I were to do it again, I would buy this...
 http://www.newegg.com/Product/Product.asp?Item=N82E1682064
 
 
 
 On Fri, 2006-10-27 at 17:22 -0400, Bill Davidsen wrote:
  Dan wrote:
  
  I have been using an older 64bit system, socket 754 for a while now.  It 
  has
  the old PCI bus 33Mhz.  I have two low cost (no HW RAID) PCI SATA I cards
  each with 4 ports to give me an eight disk RAID 6.  I also have a Gig NIC,
  on the PCI bus.  I have Gig switches with clients connecting to it at Gig
  speed.
  
  As many know you get a peak transfer rate of 133 MB/s or 1064Mb/s from that
  PCI bus http://en.wikipedia.org/wiki/Peripheral_Component_Interconnect
  
  The transfer rate is not bad across the network, but my bottleneck is the
  PCI bus.  I have been shopping around for new MB and PCI-express cards.  I
  have been using mdadm for a long time and would like to stay with it.  I am
  having trouble finding an eight port PCI-express card that does not have 
  all
  the fancy HW RAID which jacks up the cost.  I am now considering using a MB
  with eight SATA II slots onboard.  GIGABYTE GA-M59SLI-S5 Socket AM2 NVIDIA
  nForce 590 SLI MCP ATX.
  
  What are other users of mdadm using with the PCI-express cards, most cost
  effective solution?
  
  There may still be m/b available with multiple PCI busses. Don't know if 
  you are interested in a low budget solution, but that would address 
  bandwidth and use existing hardware.
  
  Idle curiosity: what kind of case are you using for the drives? I will 
  need to spec a machine with eight drives in the December-January timeframe.
  

