On Sat, 2009-02-21 at 18:09 -0600, Les Mikesell wrote:
Yes, but raid1 in software has none of those problems, since as far as
the boot loader is concerned, you are booting from a single drive. And
there is a trade-off in complexity, since sw raid works the same on
Linux across different
You will have to prove that. I have previously posted links to
benchmarks showing that hardware raid with sufficient processing
power beats the pants off software raid when it comes to raid5/6
implementations. Hardware raid cards no longer come with crappy i960 CPUs.
Kay Diederichs wrote:
A good place to start comparing benchmark numbers for different RAID
levels is
http://linux-raid.osdl.org/index.php/Performance
in particular the links given in section Other benchmarks from 2007-2008
I like this bit of info from
On Sun, Feb 22, 2009 at 7:05 PM, Ian Forde i...@duckland.org wrote:
RAID in software, whether RAID1 or RAID5/6, always has manual steps
involved in recovery. If one is using standardized hardware, such as HP
DL-x80 hardware or Dell x950 boxes, HW RAID obviates the need for a
recovery
If I have to do hardware raid, I'll definitely spec in a backup
controller. I learnt this the hard way when my raid 5 controller died
years after I first got it and I could no longer find a replacement.
For high-budget projects, having the extra raid controller as
insurance isn't a big
Chan Chung Hang Christopher schrieb:
"md1 will read from both disks" is not true in general.
RAID1 md reads from one disk only; it uses the other one in case the
first one fails. There is no performance gain from multiple copies.
I beg to differ. I have disks in a raid1 md array and iostat -x 1 will
show reads coming off both disks.
Kay Diederichs wrote:
hdparm -tT tests one type of disk access, other tools test other
aspects. I gave the hdparm numbers because everyone can reproduce them.
For RAID0 with two disks you do see - using e.g. hdparm - the doubling
of performance from two disks.
If you take the time to read
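For anyone who wants to reproduce those numbers, the invocation is just the following (a sketch; /dev/sda and /dev/md0 are example device names, and it needs root):

```shell
# hdparm's sequential-read test: -T measures cached (RAM) throughput,
# -t measures buffered reads from the disk itself.
hdparm -tT /dev/sda    # single-disk baseline
hdparm -tT /dev/md0    # a two-disk RAID0 should roughly double the -t figure
```

Bear in mind it only exercises large sequential reads, which is exactly the caveat above.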
Would running two cp commands to copy two different sets of files to two
different targets suffice as a basic two-thread test?
So long as you generate disk access through a file system and not hdparm.
Is there a way to monitor actual disk transfers from the command line without
having to do manual
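A self-contained sketch of that two-cp test (the paths here are temp dirs created on the spot, so nothing real is assumed; iostat comes from the sysstat package):

```shell
# Build two small source trees so the copies have something to move.
src1=$(mktemp -d); src2=$(mktemp -d); dst1=$(mktemp -d); dst2=$(mktemp -d)
dd if=/dev/zero of="$src1/big1" bs=1M count=8 2>/dev/null
dd if=/dev/zero of="$src2/big2" bs=1M count=8 2>/dev/null
# The "two thread" test: two cp's running in parallel.
cp -r "$src1"/. "$dst1"/ &
cp -r "$src2"/. "$dst2"/ &
# Meanwhile, in another terminal: iostat -x 1 shows per-device r/s, w/s,
# rkB/s and wkB/s, i.e. which physical disks the transfers actually hit.
wait
ls -l "$dst1/big1" "$dst2/big2"
```

On real data sets you'd point the sources and targets at directories on the arrays you are comparing.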
Chan Chung Hang Christopher wrote:
We were talking about RAID1; RAID5/6 is a different area. Linux software
RAID1 is a safeguard against disk failure; it's not designed for a speed
increase. There are a number of things that could be improved in Linux
software RAID; read performance of
On Sat, 2009-02-21 at 08:40 +0800, Chan Chung Hang Christopher wrote:
Ian Forde wrote:
I'd have to say no on the processing power for RAID 5. Moore's law has
grown CPU capabilities over the last 15 or so years. HW RAID
controllers haven't gotten that much faster because they haven't
A good place to start comparing benchmark numbers for different RAID
levels is
http://linux-raid.osdl.org/index.php/Performance
in particular the links given in section Other benchmarks from 2007-2008
HTH,
Kay
___
CentOS mailing list
Ian Forde wrote:
Might not be a bad idea to see how they're able to use
mdadm to detect and autosync drives. I don't *ever* want to go through
something like:
http://kev.coolcavemen.com/2008/07/heroic-journey-to-raid-5-data-recovery/
Not when a little planning can help me skip it... ;)
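For the detect-and-autosync part, md itself already gives you most of it; a minimal sketch (the mail address and device names are placeholders):

```shell
# Array health at a glance -- [UU] means both mirror halves are live:
cat /proc/mdstat
# Run mdadm as a monitoring daemon that mails on Fail/DegradedArray events:
mdadm --monitor --scan --daemonise --mail=root@localhost
# After swapping a failed drive, re-adding it triggers an automatic resync:
#   mdadm /dev/md0 --add /dev/sdb1
```

With that in place you hear about a dead member long before the second one goes.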
Noob Centos Admin schrieb:
On Thu, Feb 19, 2009 at 4:22 AM, Ray Van Dolson ra...@bludgeon.org wrote:
The other side of the coin (as I think you mentioned) is that many are
not comfortable having LVM handle the mirroring. Are its mirroring
"md1 will read from both disks" is not true in general.
RAID1 md reads from one disk only; it uses the other one in case the
first one fails. There is no performance gain from multiple copies.
I beg to differ. I have disks in a raid1 md array and iostat -x 1 will
show reads coming off both disks.
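Anyone can check this on their own box along these lines (a sketch; sda/sdb and md0 are example names, and it needs root):

```shell
# Two concurrent sequential readers against the raid1 device...
dd if=/dev/md0 of=/dev/null bs=1M count=1024 &
dd if=/dev/md0 of=/dev/null bs=1M count=1024 skip=2048 &
# ...while sampling the member disks for ten seconds; nonzero r/s on
# BOTH sda and sdb means md is balancing reads across the mirror:
iostat -x sda sdb 1 10
wait
```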
On Fri, 2009-02-20 at 22:52 +0800, Chan Chung Hang Christopher wrote:
Bollocks. The only area in which hardware raid has a significant
performance advantage over software raid is raid5/6 given sufficient
cache memory and processing power.
I'd have to say no on the processing power for RAID
On Thu, Feb 19, 2009 at 4:22 AM, Ray Van Dolson ra...@bludgeon.org wrote:
The other side of the coin (as I think you mentioned) is that many are
not comfortable having LVM handle the mirroring. Are its mirroring
abilities as mature or fast as md's? It's certainly not documented as
well as the
For controller, what is the interface on your drives? SCSI, SAS?
Dell 2950, SAS 6 Host Bus Controller.
Integrated SAS 6/i (base): 4 port SAS controller (does support RAID 0/1)
But I don't know if that is decent hw raid or crap raid...
JD
On 18-Feb-09, at 8:17 AM, Ray Van Dolson wrote:
So this isn't the PERC then? The PERC should be real hardware RAID...
Ray
It is the SAS 6/i.
d
on 2-17-2009 1:52 PM dnk spake the following:
Hi there,
I am currently setting up a server that will house my backups (simply
using rsync).
This system has 4 x 500 GB drives and I am looking to raid for max
drive space and data safety. Performance is not so much a concern.
Max size
On 18-Feb-09, at 9:14 AM, Scott Silva wrote:
http://tldp.org/HOWTO/Software-RAID-HOWTO.html
http://www.linuxdevcenter.com/pub/a/linux/2002/12/05/RAID.html
If I am to understand the tutorials right, does one create the
raid/lvm after install? Or do you boot off the disk, use these tools,
Scott Silva wrote:
You can make LVM over raid 1's in Disk Druid, but I don't think it will do
raid 10. And you cannot boot from software raid 5 (yet).
An LVM over several raid 1's is effectively raid10, as LVM will stripe the
volumes across the devices. It would be nice if LVM could do
Ray Van Dolson wrote:
Can't Linux LVM do mirroring? I swear I read that it could in the man
page. Never have tried it however and you certainly can't set it up
from disk druid in anaconda.
dunno. the word 'mirror' occurs exactly once in the man page for lvm(8):
lvconvert --
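For what it's worth, that is where LVM's mirroring lives; a minimal sketch, assuming a VG vg0 with a linear LV lv0 and enough spare PV space for a second leg (all names hypothetical):

```shell
lvconvert -m1 vg0/lv0          # add one mirror leg to lv0
lvs -o name,copy_percent vg0   # watch the mirror sync progress
lvconvert -m0 vg0/lv0          # drop back to a single copy
```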
John R Pierce wrote:
An LVM over several raid 1's is effectively raid10, as LVM will stripe the
volumes across the devices. It would be nice if LVM could do
mirroring too (like LVM on AIX does) and were more tightly integrated with the
file system tools (again, like LVM on AIX... grow a LV and it grows the
JFS
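A sketch of that LVM-over-raid1's layout (all names are examples). One caveat: lvcreate is linear by default, so the raid10-like striping has to be requested explicitly with -i:

```shell
pvcreate /dev/md0 /dev/md1          # the two raid1 pairs become PVs
vgcreate vg_data /dev/md0 /dev/md1
# 2 stripes, 64K stripe size, across both mirrors -> effectively raid10:
lvcreate -i 2 -I 64 -n lv_data -l 100%FREE vg_data
```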
Hi there,
I am currently setting up a server that will house my backups (simply
using rsync).
This system has 4 x 500 GB drives and I am looking to raid for max
drive space and data safety. Performance is not so much a concern.
My experience with software raids is nil, so some of these may
For controller, what is the interface on your drives? SCSI, SAS?
John Plemons
On 17-Feb-09, at 2:01 PM, John Plemons wrote:
For controller, what is the interface on your drives?? SCSI, SAS??
John Plemons
Dell 2950, SAS 6 Host Bus Controller.
d
On 17-Feb-09, at 2:08 PM, Rob Kampen wrote:
Linux software raid works fine and I use this recipe
http://linux-raid.osdl.org/index.php/Partitioning_RAID_/_LVM_on_RAID
If you can afford a proper hardware raid controller for raid 6, that
would perform better than linux software raid,
Dnk,
I use two drives with linux raid 1 sets for the OS for all my CentOS
machines; drives are cheaper than my rebuild time and hassle.
I actually use 3 partitions mirrored on each: /boot of 100M; swap of 2
times RAM (disk is cheap); 70G as /; then the remainder is an extended
partition for lvm -
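Spelled out as mdadm commands, that layout looks something like this (a sketch; it assumes matching partitions have already been created on sda and sdb):

```shell
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1  # /boot, ~100M
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2  # swap, ~2x RAM
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3  # /, ~70G
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sda4 /dev/sdb4  # rest, PV for lvm
```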
Would it be best to raid 1 two drives each and LVM them together?
My next question would be about how to do this as I have never done a
linux software raid.
I would do it this way if they are not system disks, e.g.:
sdc + sdd = md0 (raid 1)
sde + sdf = md1 (raid 1)
md0 + md1 = md2 (raid 0)
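In mdadm terms that recipe would be roughly (a sketch; whole-disk members as in the example above, non-system disks only):

```shell
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sde /dev/sdf
mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1
mkfs.ext3 /dev/md2   # then mount md2 as one big striped, mirrored volume
```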