Re: Software raid VS hardware raid

2013-01-30 Thread Modulok
 My other concern is what happens when one drive goes down if we use
 gmirror? Is it completely transparent,
 and can the bad drive be hot-swapped while the server is running and a
 rebuild started?
 I am thinking now about gpt+gmirror (including boot and swap)

 Artem


Yes. In fact, you can test this by unplugging the data or power cable to a
drive while the server is running. I've done this with consumer SATA drives
and, so far, have not had a problem. The server stays up and running, and disk
access is not interrupted. I can then plug in a new disk, add it to the
gmirror, and the array rebuilds.
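
For what it's worth, a minimal sketch of keeping an eye on that rebuild (the
mirror name gm0 is hypothetical; substitute your own):

  gmirror status gm0    # lists the mirror's components and their state
  gmirror list gm0      # more detail, including synchronization progress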

I've not tried this with GPT, so I can't comment there.
-Modulok-


Re: Software raid VS hardware raid

2013-01-30 Thread Artem Kuchin


30.01.2013 1:01, Warren Block:

On Tue, 29 Jan 2013, Artem Kuchin wrote:



29.01.2013 18:57, Warren Block:

On Tue, 29 Jan 2013, Artem Kuchin wrote:

The Handbook chapter on gmirror talks about the problems with GPT 
and GEOM metadata.  In short: right now, they conflict. It's 
possible to mirror GPT partitions, but be aware that if you mirror 
more than one partition on a drive, a rebuild after replacing a 
drive could thrash the heads as mirrors are rebuilt simultaneously.


http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/geom-mirror.html 





So:
gmirror+GPT = conflict over the last sector
GPT+gmirror = hard drive head thrashing

Nice...

So, for disks of no more than 2 TB, is the best way to go a gmirror of the
whole drive with partitions on top of it?


GPT partitions should work, just limit it to one mirrored partition 
per drive.


Please clarify what you mean here.



Or maybe there is a way to instruct gmirror to rebuild only what I say
(manual rebuild)?


'gmirror configure -n' ?  Have not tried it.  The trick would be to do 
that before multiple mirrors start rebuilding, which they will as soon 
as geom_mirror.ko is loaded.




As I understand from the man page, -n sets the device up never to
auto-rebuild. So this is probably the thing I want. I need to set up a test
system and play with it a bit.
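
A minimal sketch of such a test, assuming two mirrors named gmroot and gmdata
(names hypothetical); per the man page, 'configure -n' turns
autosynchronization off, and 'rebuild' then forces it by hand:

  gmirror configure -n gmroot      # no automatic rebuild on this mirror
  gmirror configure -n gmdata
  # after swapping the disk, rebuild one mirror at a time:
  gmirror rebuild gmroot ada1p2
  gmirror rebuild gmdata ada1p3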


Artem


Re: Software raid VS hardware raid

2013-01-30 Thread Andrea Venturoli

On 01/28/13 21:43, Artem Kuchin wrote:


I am planning to use mirror configuration of two SATA 7200rpm 2TB disks.


I personally vote for gmirror in this case; I've used it a lot and found it
very good with regard to both performance and robustness.


You can spend the money you save on the controller on good disks; as someone
else pointed out, don't get desktop-class ones, but 24x7-rated ones.


Just my 2c.

 bye
av.



Re: Software raid VS hardware raid

2013-01-30 Thread Artem Kuchin


30.01.2013 18:06, Warren Block:

On Wed, 30 Jan 2013, Artem Kuchin wrote:



30.01.2013 1:01, Warren Block:

On Tue, 29 Jan 2013, Artem Kuchin wrote:



29.01.2013 18:57, Warren Block:

On Tue, 29 Jan 2013, Artem Kuchin wrote:

The Handbook chapter on gmirror talks about the problems with GPT 
and GEOM metadata.  In short: right now, they conflict. It's 
possible to mirror GPT partitions, but be aware that if you mirror 
more than one partition on a drive, a rebuild after replacing a 
drive could thrash the heads as mirrors are rebuilt simultaneously.


http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/geom-mirror.html 



So:
gmirror+GPT = conflict over the last sector
GPT+gmirror = hard drive head thrashing

Nice...

So, for disks of no more than 2 TB, is the best way to go a gmirror of the
whole drive with partitions on top of it?


GPT partitions should work, just limit it to one mirrored partition 
per drive.


Please clarify what you mean here.


If only one GPT partition on a drive is mirrored with another GPT 
partition on another drive, head contention never comes up.  There is 
only one mirror.


It does nearly eliminate the usefulness of GPT partitioning.

Um... and how can I do that if I have a simple mirror with two drives and
want to mirror everything on them? As I understand it, I will have at least
boot, swap, and UFS partitions on those drives; that is at least three
partitions.


Artem


Re: Software raid VS hardware raid

2013-01-30 Thread Paul Kraus
On Jan 30, 2013, at 8:10 AM, Andrea Venturoli wrote:

 You can spend the extra money you spare on the controller buying good disks; 
 as someone else pointed out don't get desktop-class ones, but 24x7 ones.

Server-class drives buy you some improvement, but my recent experience with
Seagate Barracuda ES.2 drives is not that good. I have had 50% of them fail
within the 5-year warranty period. My disks run 24x7 and I use ZFS under
FreeBSD 9, so I have not lost any data. I have:

2 x Seagate ES.2 250 GB (one has failed)
4 x Seagate ES.2 1 TB (two have failed)
2 x Hitachi UltraStar 1 TB (pre-WD acquisition); no failures, but they are
less than 2 years old. They are also noticeably faster than the Seagate ES.2
drives.

I just ordered 2 x WD RE4 500 GB; we'll see how those do.

I go out of my way to purchase disks with a 5-year warranty; they are still
out there, but you have to look for them.

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: Software raid VS hardware raid

2013-01-30 Thread Artem Kuchin

There seems to be one more advantage to gmirror, if I understood correctly:

gmirror label -v -b split -s 2048 data da0 da1 da2

will create a triple mirror (RAID 1 with three copies), i.e. triple
redundancy, which is rarely available on hardware RAID.

Am I correct here?

Also, does anyone know how to choose the split threshold (-s 2048) correctly?

Artem






Re: Software raid VS hardware raid

2013-01-30 Thread Warren Block

On Wed, 30 Jan 2013, Artem Kuchin wrote:


30.01.2013 18:06, Warren Block:


GPT partitions should work, just limit it to one mirrored partition per 
drive.


Please clarify what you mean here.


If only one GPT partition on a drive is mirrored with another GPT partition 
on another drive, head contention never comes up.  There is only one 
mirror.


It does nearly eliminate the usefulness of GPT partitioning.

Um... and how can I do that if I have a simple mirror with two drives and
want to mirror everything on them? As I understand it, I will have at least
boot, swap, and UFS partitions on those drives; that is at least three
partitions.


If you want to use the same drive for booting, it's possible.  Create 
all three partitions on both drives manually.  Then mirror the 
freebsd-ufs partition only.  The contents of the freebsd-boot partition 
don't change often, and swap does not have to be mirrored.


Not that it's easy or convenient, but it's an option.
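
A minimal sketch of that layout (device names and sizes hypothetical;
gpart backup/restore simply clones the partition table onto the second disk):

  gpart create -s gpt ada0
  gpart add -t freebsd-boot -s 512k ada0
  gpart add -t freebsd-swap -s 4g ada0
  gpart add -t freebsd-ufs ada0
  gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ada0
  gpart backup ada0 | gpart restore -F ada1   # same table on the second disk
  gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ada1
  gmirror label -v gmroot ada0p3 ada1p3       # mirror only the freebsd-ufs partition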


Re: Software raid VS hardware raid

2013-01-30 Thread Paul Kraus
On Jan 30, 2013, at 10:22 AM, Warren Block wrote:

 If you want to use the same drive for booting, it's possible.  Create all 
 three partitions on both drives manually.  Then mirror the freebsd-ufs 
 partition only.  The contents of the freebsd-boot partition don't change 
 often, and swap does not have to be mirrored.

Note that if you do NOT mirror SWAP, then in the event of a disk 
failure you will most likely crash when the system tries to swap in some data 
from the failed drive. If you mirror swap then you do not risk a crash due to 
missing swap data.
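
A minimal sketch of mirroring swap with gmirror (partition names
hypothetical); the fstab entry then points at the mirror device rather than a
raw partition:

  gmirror label -v gmswap ada0p2 ada1p2
  # in /etc/fstab:
  # /dev/mirror/gmswap  none  swap  sw  0  0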

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: Software raid VS hardware raid

2013-01-30 Thread Artem Kuchin


30.01.2013 19:28, Paul Kraus:

On Jan 30, 2013, at 10:22 AM, Warren Block wrote:


If you want to use the same drive for booting, it's possible.  Create all three 
partitions on both drives manually.  Then mirror the freebsd-ufs partition 
only.  The contents of the freebsd-boot partition don't change often, and swap 
does not have to be mirrored.

Note that if you do NOT mirror SWAP, then in the event of a disk 
failure you will most likely crash when the system tries to swap in some data 
from the failed drive. If you mirror swap then you do not risk a crash due to 
missing swap data.



Yes, that's what I wanted to say.
Also, not being able to boot because the first disk has some error in its
boot sector, or is just strangely dead, is not acceptable either. However, I
was just thinking: if I use gmirror, then the BIOS does not know anything
about it. I may set both hard disks as boot disks, but if the first disk is
brain-damaged, the BIOS may just get stuck trying to boot from it and never
pass the boot attempt to the second disk. I don't know; it depends on the
BIOS, of course. But this seems to be a disadvantage of software RAID.

Artem





Re: Software raid VS hardware raid

2013-01-30 Thread Warren Block

On Wed, 30 Jan 2013, Artem Kuchin wrote:

Also, not being able to boot if first disk has some error in boot 
section or just strangly dead is not an option too. However, i was 
just thinking, if i use gmirror then bios does not know anything about 
it. I may set both harddisk as boot disk, but if first disk is brain 
damaged then bios may just stuck trying to boot from it and will not 
pass boot attempt to the second disk. I don't know, it depends on bios 
of course. But this seems to be a disadvantage to a software raid.


That's true.  The similar situation with hardware RAID is when the 
controller fails.  The metadata is probably specific to that 
manufacturer and maybe to that model of controller.  It's a good idea to 
get spares, because as Murphy is my witness, in an emergency that 
controller will not be available in the same town, district, country, or 
continent.  More likely it will have been long discontinued, with no 
data migration path.



Re: Software raid VS hardware raid

2013-01-29 Thread Artem Kuchin


29.01.2013 11:54, Michael Powell:

Artem Kuchin wrote:


I guess what I'm trying to point out is that low performance with software
RAID will stem from other things besides simply consuming a few CPU cycles.
Today's CPUs have the cycles to spare. I've been using gmirror for RAID 1
mirrors for a few years now and am happy with it. I have had a few old drives
die and the servers stayed up and online. This allowed me to defer the actual
drive replacement and not have to drop everything and fight fire.



Thank you, everyone, for replying.

I realize that many other things affect the performance, not only CPU
power. For example, disk I/O kernel multithreading is one of those things.
But I guess in FreeBSD 9 it is more or less solved.
The server is going to be a web server with many sites and with MySQL
running on it. Nothing really, really heavy. Currently we run all this on
our own server with 8 cores, 16 GB RAM, and 3ware RAID 1, and the CPU load
is about 5% :) Everything is quick and responsive. I hope to see the same
on a software RAID.

I really don't want to deploy ZFS on the new server that all these sites
need to migrate to, because I am kind of a 'don't fix it if it is not
broken' guy. UFS+journaling+softupdates has served us well for years, and
snapshots are available on UFS too.

My other concern is what happens when one drive goes down if we use
gmirror? Is it completely transparent, and can the bad drive be hot-swapped
while the server is running and a rebuild started?

I am thinking now about gpt+gmirror (including boot and swap).

Artem



Re: Software raid VS hardware raid

2013-01-29 Thread Michael Powell
Artem Kuchin wrote:

[snip]
 The server is going to be a web server with many sites and with mysql
 running on it. Nothing really really
 heavy. Currently with run all this on our own server with 8 cores and
 16GB ram and 3ware raid1
 and cpu load is about 5% :) Everything is quick and responsive. I hope
 to see the same on a software raid.

The controller would be a slight concern, but for what you've described doing
I doubt it will be a big deal. The 3ware may have a faster processor on it
than, say, a generic onboard chip. But since all we're talking about here is
a RAID 1 mirror, my guess is it may not be a big enough difference to see.
Writes will be just as if you were writing to one drive; reads will be faster.
Maybe that 5% CPU load turns into 6% or 7%.
 
 I really don't want to deploy ZFS on a new server where all these site
 need to migrate because i am kind of
 don't fix it if it is not broken kind of guy.
 UFS+journaling+softupdates served us well for years and snapshots
 are available on ufs too.

I understand; I've only played around with ZFS some on Solaris. I may move
in that direction some day, but for now...
 
 My other concern is what happens when one drive goes down if we use
 gmirror? Is it completely transparent,
 and can the bad drive be hot-swapped while the server is running and a
 rebuild started?
 I am thinking now about gpt+gmirror (including boot and swap)

I've never actually hot-swapped one, but I can't see any reason why not. You
can't use the 'gmirror remove' directive when a drive has failed, but you can
do a 'gmirror forget', swap the drive, then just do a 'gmirror insert' to add
the replacement drive to the mirror. When everything is working as it should,
gmirror is mostly 'automatic'; e.g., after the insert the rebuild just
starts. The main thing I appreciated about this is that the server stayed up
and online after one drive died.
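
For concreteness, a minimal sketch of that sequence (the mirror name gm0 and
device ada1 are hypothetical):

  gmirror forget gm0        # drop the failed, no-longer-connected component
  # physically swap the drive, then:
  gmirror insert gm0 ada1   # add the replacement; the rebuild starts on its own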

My two servers at home are my testbeds to try things out before doing stuff
to the ones at work. I just installed both to 9.1. The difference now is that
I've used GPT (gpart), and this is new to me. Previously everything was
always fdisk and disklabel. Both of these machines are set up on one drive at
this point, and I haven't gotten into the mirroring yet.

With the old fdisk/disklabel it was simple to just mirror the entire drive
itself (slice). The other approach is to mirror partitions. I think I may
need to do this, as I believe it is the way you have to proceed in order to
avoid having GPT and gmirror both trying to claim the last sector of the
drive (metadata storage).

-Mike




Re: Software raid VS hardware raid

2013-01-29 Thread Warren Block

On Tue, 29 Jan 2013, Artem Kuchin wrote:

My other concern is what happens when one drive goes down if we use gmirror?
Is it completely transparent,
and can the bad drive be hot-swapped while the server is running and a rebuild
started?
I am thinking now about gpt+gmirror (including boot and swap)


As far as gmirror is concerned, yes, drives can be removed and new drives
inserted while the mirror is running.  Hot swap is more of an issue with
the hardware.  I have not tried it with SATA drives, although I think it
should work.


The Handbook chapter on gmirror talks about the problems with GPT and 
GEOM metadata.  In short: right now, they conflict.  It's possible to 
mirror GPT partitions, but be aware that if you mirror more than one 
partition on a drive, a rebuild after replacing a drive could thrash the 
heads as mirrors are rebuilt simultaneously.


http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/geom-mirror.html



Re: Software raid VS hardware raid

2013-01-29 Thread Artem Kuchin


29.01.2013 18:57, Warren Block:

On Tue, 29 Jan 2013, Artem Kuchin wrote:

The Handbook chapter on gmirror talks about the problems with GPT and 
GEOM metadata.  In short: right now, they conflict.  It's possible to 
mirror GPT partitions, but be aware that if you mirror more than one 
partition on a drive, a rebuild after replacing a drive could thrash 
the heads as mirrors are rebuilt simultaneously.


http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/geom-mirror.html 






So:
gmirror+GPT = conflict over the last sector
GPT+gmirror = hard drive head thrashing

Nice...

So, for disks of no more than 2 TB, is the best way to go a gmirror of the
whole drive with partitions on top of it?
Or maybe there is a way to instruct gmirror to rebuild only what I say
(manual rebuild)?


Artem






Re: Software raid VS hardware raid

2013-01-29 Thread Mark Felder
On Tue, 29 Jan 2013 08:57:31 -0600, Warren Block wbl...@wonkity.com wrote:


As far as gmirror is concerned, yes, drives can be removed and new drives
inserted while the mirror is running.  Hot swap is more of an issue with
the hardware.  I have not tried it with SATA drives, although I think it
should work.

The Handbook chapter on gmirror talks about the problems with GPT and
GEOM metadata.  In short: right now, they conflict.  It's possible to
mirror GPT partitions, but be aware that if you mirror more than one
partition on a drive, a rebuild after replacing a drive could thrash the
heads as mirrors are rebuilt simultaneously.

http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/geom-mirror.html


Why isn't gmirror more intelligent? I hate to use Linux as an example, but
mdadm won't simultaneously rebuild multiple RAID sets that share the same
physical providers, precisely to prevent this. Could this be added as a
feature? Even as a sysctl toggle?



Re: Software raid VS hardware raid

2013-01-29 Thread Warren Block

On Tue, 29 Jan 2013, Artem Kuchin wrote:



29.01.2013 18:57, Warren Block:

On Tue, 29 Jan 2013, Artem Kuchin wrote:

The Handbook chapter on gmirror talks about the problems with GPT and GEOM 
metadata.  In short: right now, they conflict.  It's possible to mirror GPT 
partitions, but be aware that if you mirror more than one partition on a 
drive, a rebuild after replacing a drive could thrash the heads as mirrors 
are rebuilt simultaneously.


http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/geom-mirror.html 





So:
gmirror+GPT = conflict over the last sector
GPT+gmirror = hard drive head thrashing

Nice...

So, for disks of no more than 2 TB, is the best way to go a gmirror of the
whole drive with partitions on top of it?


GPT partitions should work, just limit it to one mirrored partition per 
drive.


Or maybe there is a way to instruct gmirror to rebuild only what I say
(manual rebuild)?


'gmirror configure -n' ?  Have not tried it.  The trick would be to do 
that before multiple mirrors start rebuilding, which they will as soon 
as geom_mirror.ko is loaded.



Software raid VS hardware raid

2013-01-28 Thread Artem Kuchin

Hello!

I have to make a decision on choosing a dedicated server.
The problem I see is that while I can find very affordable and good
options, they do not provide hardware RAID, or even if they do, it is not
the best hardware for FreeBSD.

The base server configuration is 8 cores, 32 GB RAM, 2.8+ GHz.
So, maybe someone has personal experience with both worlds and can tell
whether it really matters in such a configuration if I go for software
RAID. What are the benefits, and what are the negatives, of software RAID?
How big is the performance penalty?
I am planning to use a mirror configuration of two SATA 7200 rpm 2 TB
disks. Nothing fancy.

The planned file system is UFS with journaling.
Artem



Re: Software raid VS hardware raid

2013-01-28 Thread Per olof Ljungmark
On 01/28/13 21:43, Artem Kuchin wrote:
 Hello!
 
 I have to make a decision on choosing a dedicated server.
 The problem I see is that while I can find very affordable and good
 options, they do not provide hardware RAID, or even if they do, it is
 not the best hardware for FreeBSD.
 The base server configuration is 8 cores, 32 GB RAM, 2.8+ GHz.
 So, maybe someone has personal experience with both worlds and can tell
 whether it really matters in such a configuration if I go for software
 RAID. What are the benefits, and what are the negatives, of software
 RAID? How big is the performance penalty?
 I am planning to use a mirror configuration of two SATA 7200 rpm 2 TB
 disks. Nothing fancy.
 The planned file system is UFS with journaling.
 

I won't delve into detail here, but if the data is important, HW RAID is
where you want to be. Perhaps you could give us a few more details
about what the purpose of the server is? Mission-critical or low cost?
Those two tend to be mutually exclusive...

We are HP-only but have good experience from LSI as well.

Just my $0.02.

//per


Re: Software raid VS hardware raid

2013-01-28 Thread Daniel Feenberg



On Mon, 28 Jan 2013, Per olof Ljungmark wrote:


On 01/28/13 21:43, Artem Kuchin wrote:

Hello!

I have to make a decision on choosing a dedicated server.
The problem I see is that while I can find very affordable and good
options, they do not provide hardware RAID, or even if they do, it is
not the best hardware for FreeBSD.
The base server configuration is 8 cores, 32 GB RAM, 2.8+ GHz.
So, maybe someone has personal experience with both worlds and can tell
whether it really matters in such a configuration if I go for software
RAID. What are the benefits, and what are the negatives, of software
RAID? How big is the performance penalty?
I am planning to use a mirror configuration of two SATA 7200 rpm 2 TB
disks. Nothing fancy.
The planned file system is UFS with journaling.



I won't delve into detail here but if the data is important HW RAID is
where you want to be. Perhaps you could give us a little more details


A problem with HW RAID is that if the controller breaks, you need to get
an identical controller to replace it, or the data will be lost. With
software RAID, you can read the data on any machine that will boot
FreeBSD. That is a great convenience compared to searching eBay for an
obsolete controller with the proper rev level.


We haven't noticed any speed disadvantage on modern multi-core hardware
with RAID 1. The advantages of HW RAID escape me - I understand that
years ago it provided OS independence and reduced CPU load, but it no
longer provides the former, and with 8 cores do you need the latter while
waiting for a disk platter to spin?


ZFS is worthwhile, too, especially since you have a good amount of memory. 
That would give you snapshots and some other desirable features, such as 
background scanning for defects that UFS doesn't have.



about what the purpose of the server is? Mission-critical or low cost?
Those two tends to be mutually exclusive...


Surely the presence of SATA drives shows that low cost is essential.

Mirroring and ZFS provide very important advantages. HW RAID seems to fill
a much-needed gap (apologies to Brian Kernighan).


daniel feenberg




We are HP-only but have good experience from LSI as well.

Just my $0.02.




Re: Software raid VS hardware raid

2013-01-28 Thread Paul Kraus
On Jan 28, 2013, at 3:43 PM, Artem Kuchin wrote:

 I have to made a decision on choosing a dedicated server.
 The problem i see is that while i can find very affordable and good options 
 they do not
 provide hardware raid or even if they do it is not the best hardware for 
 freebsd.

I prefer SW RAID, specifically ZFS, for two very large reasons:

1) Visibility: From the OS layer you have very good visibility into the health
of the RAID set and the underlying drives. All of the lower-end HW RAID
solutions I have seen require proprietary software to manage the RAID
configuration, usually from the physical system's BIOS layer. Finding good
OS-layer software to monitor the RAID and the drives has been very painful. If
you don't know you have a failure, then you can't do anything about it, and
when you have a second failure you lose data. Running a HW RAID system and not
being able to issue a simple command from the OS and see the status of the
RAID scares me.

2) Error Detection and Correction: HW RAID relies on the drives to report read
and write errors. With UNCORRECTABLE bit error rates of 10^-14 to 10^-15 and
LARGE (1 TB plus) drives, you are statistically almost guaranteed to run into
UNCORRECTABLE errors over the life of a typical drive. ZFS has end-to-end
checksums and can detect a single bad bit from a drive; if the set is
redundant it can recreate the correct data and rewrite it, effectively
correcting the bad data on disk.
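
A minimal sketch of exercising that from the OS (the pool name tank is
hypothetical):

  zpool scrub tank        # read every block and verify its checksum
  zpool status -v tank    # per-device READ/WRITE/CKSUM error counts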

NOTE: Larger, more expensive HW RAID systems address both of the above issues, 
but at a much higher cost in terms of money and management overhead.

DISCLAIMER: I have been managing mission-critical, cannot-afford-to-lose-it
data under ZFS for over 5 years, with no loss of data (even with some horribly
unreliable low-cost HW RAID systems under the ZFS layer... if we had not used
ZFS we would have lost data multiple times).

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company



Re: Software raid VS hardware raid

2013-01-28 Thread Michael Powell
Artem Kuchin wrote:

 Hello!
 
 I have to make a decision on choosing a dedicated server.
 The problem I see is that while I can find very affordable and good
 options, they do not provide hardware RAID, or even if they do, it is
 not the best hardware for FreeBSD.
 The base server configuration is 8 cores, 32 GB RAM, 2.8+ GHz.
 So, maybe someone has personal experience with both worlds and can tell
 whether it really matters in such a configuration if I go for software
 RAID. What are the benefits, and what are the negatives, of software
 RAID? How big is the performance penalty?
 I am planning to use a mirror configuration of two SATA 7200 rpm 2 TB
 disks. Nothing fancy.
 The planned file system is UFS with journaling.

I can't say for sure exactly what's best for your needs; however, please
allow me to toss out some very generic tidbits which may aid you in some
way.

Historically back when RAID was new, hardware controllers were the only way 
to go. Back then I would never look at software RAID for a server machine. 
Best to offload as much work away from the CPU as possible to free it up for 
running the OS. What has changed is the amount of raw horsepower available 
from modern-day processors as compared to when RAID first came out. On the 
multi-core monster CPUs of today software RAID is a perfectly viable 
consideration because there are CPU cycles to spare, so the performance 
penalty is less now than it once was.

Having said that, there are several other considerations to keep in mind as 
well. The type of RAID required matters. If you want/need RAID 5/6 it is 
definitely better to go with hardware RAID because of the horsepower 
required to do the XOR parity generation. You would want RAID 5/6 running on 
a hardware controller and not on the CPU. On the other hand, RAID 0, 1, and 
10 are fine candidates for software RAID.

One thing I've noticed that seems to get somewhat lost in this discussion is
the idea that software RAID means not needing to spend money on an expensive
RAID controller. At first glance it does seem like quite a waste to spend
hundreds of dollars on a really fast RAID controller and then turn all its
functionality off and just use it JBOD style. If you truly want performance
you still need the processing power of the hardware chip on the (expensive)
controller. Most central to this is I/Os per second. This matters more to
some workloads than others, with database servers probably at the top of the
list, where I/Os per second is king. The better the chip on the controller
card, the more I/Os per second.

Another thing, which matters less with regard to server hardware, is the third
kind of RAID known as fake or pseudo RAID. This is mostly found on desktop PC
motherboards and some low-end (cheap) hardware cards. There is a config in
the BIOS to set up so-called RAID, but it is only half of the matter - the
other half is in the driver. FreeBSD does indeed have support for some of
these fake RAID things, but I stay far, far away from them. Either go
hardware or pure software only - the fakeraid is crap.

Another thing I'd warn you about is the drives themselves. Take a look:

http://wdc.custhelp.com/app/answers/detail/a_id/1397

Many people get very lucky much of the time and don't experience problems
with this. Using drives designed for desktop PCs with RAID can be prone to
problems. Drives designed for servers are more expensive, but I've always
felt it is better to put server drives in servers.   :-)

In terms of a 'performance penalty', what you will find is that it gets
shifted away from just losing a few CPU cycles into other areas. If the drives
are Advanced Format 4k-sector critters and they aren't properly aligned in the
partitioning phase of setup, performance will take a hit. If the controller
chip they are hooked up to is slow, then the entire drive subsystem will
suffer. Another thing that will surface as a problem area is the shift away
from the old-style DOS MBR scheme and towards GPT. Software RAID (and indeed
hardware controllers too) stores its metadata at the end of the drive, and it
needs to be outside the file system. The problem arises when both the software
RAID and the GPT partitioning try to store metadata in the same location and
collide. Just knowing about this in advance and spending some quality reading
time on it prior to trying to set up the box will help greatly. Plenty has
been written (even on this list) about this subject by people smarter than me,
so the info you need is out there, albeit it can be confusing at first.
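
For the alignment part, a minimal sketch (device name and label are
hypothetical; -a 1m forces 1 MB alignment, which also satisfies 4k sectors):

  gpart add -a 1m -t freebsd-ufs -l data0 ada0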

I guess what I'm trying to point out is that low performance with software
RAID will stem from other things besides simply consuming a few CPU cycles.
Today's CPUs have the cycles to spare. I've been using gmirror for RAID 1
mirrors for a few years now and am happy with it. I have had a few old drives
die and the servers stayed up and online. This allowed me to defer the actual
drive replacement and not have to drop everything and fight fire.