Re: ZFS or UFS for 4TB hardware RAID6?

2009-07-20 Thread Tom Worster
On 7/16/09 6:12 PM, Maxim Khitrov mkhit...@gmail.com wrote:

 
 I'd love to hear about any test results you may get comparing software with
 hardware raid.
 
 I received the hardware yesterday. There was a last minute change due
 to cost. Instead of getting 4x 2TB drives I opted for 6x 1TB. This
 limits my future expansion a bit, but that may be a few years down the
 line. On the plus side, I can get the 4TB of RAID6 that I originally
 planned for and the performance should be better because of additional
 disks in the array.
 
 Sometime next week I'll install FreeBSD 8 and will then be able to run
 a few benchmarks. After that I'll configure software RAID and repeat
 the process. Are there any specific tests that you guys would like me
 to run? Sequential read/write tests using dd are a given, beyond that
 I'm not familiar with any ports under benchmarks/, so if you know of
 anything good, tell me.

sorry, i can't help. i've never done any benchmarking.

if zfs raid performed well relative to the dedicated raid controller, then
eliminating that non-redundant sub-system might improve system availability.
unfortunately i don't think the comparison makes sense, at least with sas,
because all the multi-port sas controller chips, iirc, are also raid
controllers.




Re: ZFS or UFS for 4TB hardware RAID6?

2009-07-16 Thread Maxim Khitrov
On Mon, Jul 13, 2009 at 4:08 PM, Richard Mahlerwein mahle...@yahoo.com wrote:
  Just as a question: how ARE you planning on backing
 this beast up?  While I don't want to sound like a
 worry-wort, I have had odd things happen at the worst of
 times.  RAID cards fail, power supplies let out the magic
 smoke, users delete items they really want back... *sigh*

 Rsync over ssh to another server. Most of the data stored
 will never
 change after the first upload. A daily rsync run will
 transfer one or
 two gigs at the most. History is not required for the same
 reason;
 this is an append-only storage for the most part. A backup
 for the
 previous day is all that is required, but I will keep a
 weekly backup
 as well until I start running out of space.

  A bit of reading shows that ZFS, if it's stable
 enough, has some really great features that would be nice on
 such a large pile o' drives.
 
  See http://wiki.freebsd.org/ZFSQuickStartGuide
 
  I guess the last question I'll ask (as any more may
 uncover my ignorance) is if you need to use hardware RAID at
 all?  It seems both UFS2 and ZFS can do software RAID
 which seems to be quite reasonable with respect to
 performance and in many ways seems to be more robust since
 it is a bit more portable (no specialized hardware).

 I've thought about this one a lot. In my case, the hard
 drives are in
 a separate enclosure from the server and the two had to be
 connected
 via SAS cables. The 9690SA-8E card was the best choice I
 could find
 for accessing an external SAS enclosure with support for 8
 drives.

 I could configure it in JBOD mode and then use software to
 create a
 RAID array. In fact, I will likely do this to compare
 performance of a
 hardware vs. software RAID5 solution. The ZFS RAID-Z option
 does not
 appeal to me, because the read performance does not benefit
 from
 additional drives, and I don't think RAID6 is available in
 software.
 For those reasons I'm leaning toward a hardware
 implementation.

 If I go the hardware route, I'll try to purchase a backup
 controller
 in a year or two. :)

  There are others who may respond with better
 information on that front.  I've been a strong
 proponent of hardware RAID, but have recently begun to
 realize many of the reasons for that are only of limited
 validity now.

 Agreed, and many simple RAID setups (0, 1, 10) will give
 you much
 better performance in software. In my case, I have to have
 some piece
 of hardware just to get to the drives, and I'm guessing
 that hardware
 RAID5/6 will be faster than the closest software
 equivalent. Maybe my
 tests will convince me otherwise.

 - Max

 I'd love to hear about any test results you may get comparing software with 
 hardware raid.

I received the hardware yesterday. There was a last minute change due
to cost. Instead of getting 4x 2TB drives I opted for 6x 1TB. This
limits my future expansion a bit, but that may be a few years down the
line. On the plus side, I can get the 4TB of RAID6 that I originally
planned for and the performance should be better because of additional
disks in the array.

Sometime next week I'll install FreeBSD 8 and will then be able to run
a few benchmarks. After that I'll configure software RAID and repeat
the process. Are there any specific tests that you guys would like me
to run? Sequential read/write tests using dd are a given, beyond that
I'm not familiar with any ports under benchmarks/, so if you know of
anything good, tell me.
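
A rough sketch of the kind of dd test meant here (the mount point, test file
size and block size are placeholders; the file should be several times RAM so
caching doesn't skew the numbers):

    # sequential write, ~16 GB of zeroes onto the array
    dd if=/dev/zero of=/storage/ddtest bs=1m count=16384
    # sequential read of the same file back
    dd if=/storage/ddtest of=/dev/null bs=1m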

- Max


Re: ZFS or UFS for 4TB hardware RAID6?

2009-07-14 Thread Matthew Seaman

Richard Mahlerwein wrote:


With 4 drives, you could get much, much higher performance out of
RAID10 (which is alternatively called RAID0+1 or RAID1+0 depending on
the manufacturer


Uh -- no.  RAID10 and RAID0+1 are superficially similar but quite different
things.  The main differentiator is resilience to disk failure. RAID10 takes
the raw disks in pairs, creates a mirror across each pair, and then stripes
across all the sets of mirrors.  RAID0+1 divides the raw disks into two equal
sets, constructs stripes across each set of disks, and then mirrors the
two stripes.

Read/Write performance is similar in either case: both perform well for
the sort of small randomly distributed IO operations you'd get when, e.g.,
running an RDBMS.  However, consider what happens if you get a disk failure.
In the RAID10 case *one* of your N/2 mirrors is degraded but the other N-1
drives in the array operate as normal.  In the RAID0+1 case, one of the
2 stripes is immediately out of action and the whole IO load is carried by
the N/2 drives in the other stripe.

Now consider what happens if a second drive should fail.  In the RAID10
case, you're still up and running so long as the failed drive is one of
the N-2 disks that aren't the mirror pair of the 1st failed drive.
In the RAID0+1 case, you're out of action if the 2nd disk to fail is one
of the N/2 drives from the working stripe.  Or in other words, if two
random disks fail in a RAID10, chances are the RAID will still work.  If
two arbitrarily selected disks fail in a RAID0+1 chances are basically
even that the whole RAID is out of action[*].

I don't think I've ever seen a manufacturer say RAID1+0 instead of RAID10,
but I suppose all things are possible.  My impression was that the 0+1
terminology was specifically invented to make it more visually distinctive,
i.e. to prevent confusion between '01' and '10'.

Cheers,

Matthew

[*] Astute students of probability will point out that this really only
makes a difference for N > 4; for N=4 the chances are even either way
that failure of two drives would take out the RAID.
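
A worked instance of that footnote, under the simple model above (the
arithmetic here is added for illustration): once one disk has failed,

    P(RAID10 survives a 2nd random failure)  = (N-2)/(N-1)
    P(RAID0+1 survives a 2nd random failure) = (N/2 - 1)/(N-1)

so for N=8 that is 6/7 (about 86%) versus 3/7 (about 43%), i.e. roughly even
odds that a second failure takes the whole RAID0+1 array down.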


--
Dr Matthew J Seaman MA, D.Phil.   7 Priory Courtyard
 Flat 3
PGP: http://www.infracaninophile.co.uk/pgpkey Ramsgate
 Kent, CT11 9PW





Re: ZFS or UFS for 4TB hardware RAID6?

2009-07-14 Thread Adam Townsend
 A bit of reading shows that ZFS, if it's stable enough, has some
 really great features that would be nice on such a large pile o'
 drives.

 See http://wiki.freebsd.org/ZFSQuickStartGuide

 I guess the last question I'll ask (as any more may uncover my
 ignorance) is if you need to use hardware RAID at all?  It seems
 both UFS2 and ZFS can do software RAID which seems to be quite
 reasonable with respect to performance and in many ways seems to be
 more robust since it is a bit more portable (no specialized
 hardware).

 I've thought about this one a lot. In my case, the hard drives are in
 a separate enclosure from the server and the two had to be connected
 via SAS cables. The 9690SA-8E card was the best choice I could find
 for accessing an external SAS enclosure with support for 8 drives.

 I could configure it in JBOD mode and then use software to create a
 RAID array. In fact, I will likely do this to compare performance of a
 hardware vs. software RAID5 solution. The ZFS RAID-Z option does not
 appeal to me, because the read performance does not benefit from
 additional drives, and I don't think RAID6 is available in software.
 For those reasons I'm leaning toward a hardware implementation.



 Hi Maxim,

 RAID-Z2 is the RAID6 double parity option in ZFS.


 gr
 Arno


I'm planning on doing something like this once I get 2 more 1TB
drives.  I'm going to try out a ZFS RAID-Z (not RAID-Z2), but yeah.
I've been poking around the OpenSolaris docs on ZFS and it seems to be
really robust: you can export a pool on one OS and import it on another
(in case your root dies, or you want to migrate your disks to another
box), and you can take snapshots which are stored on the drive, but I'm
sure you could send those somewhere else to be backed up.  And if you
have really important files you can have ZFS keep multiple copies of
them automatically.  If you set it up with multiple vdevs, you can also
get a lot more speed out of disk I/O, because if you have, say, 2 raidz
vdevs, ZFS stripes across them, so you can pull data from both at once.
I can't remember if it was on this or another list, but there was a
great discussion about the performance abilities/issues of ZFS, and they
had some good points, like not using more than 8 drives per vdev.
If you search this list, the hardware list, or the hackers list I'm sure
it'll pop up.
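
A rough sketch of the two-raidz-vdev layout described above (pool and device
names are made up; adjust for your disks):

    # one pool striped across two raidz vdevs of 4 disks each
    zpool create tank raidz da0 da1 da2 da3 raidz da4 da5 da6 da7
    # snapshot, and move the pool between boxes or operating systems
    zfs snapshot tank@2009-07-14
    zpool export tank      (then 'zpool import tank' on the other machine)
    # keep two copies of especially important data automatically
    zfs set copies=2 tank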

Try it out both ways and see which is best.  There are pros and cons
to both, but it all depends on what you need for your solution.

Cheers,
Bucky

...whoops sent this as a reply to the digest w/o changing the name.  I hope
it finds the right person now.


Re: ZFS or UFS for 4TB hardware RAID6?

2009-07-14 Thread Richard Mahlerwein

--- On Tue, 7/14/09, Matthew Seaman m.sea...@infracaninophile.co.uk wrote:

 From: Matthew Seaman m.sea...@infracaninophile.co.uk
 Subject: Re: ZFS or UFS for 4TB hardware RAID6?
 To: mahle...@yahoo.com
 Cc: Free BSD Questions list freebsd-questions@freebsd.org
 Date: Tuesday, July 14, 2009, 4:23 AM
 Richard Mahlerwein wrote:
 
  With 4 drives, you could get much, much higher
 performance out of
  RAID10 (which is alternatively called RAID0+1 or
 RAID1+0 depending on
  the manufacturer
 
 Uh -- no.  RAID10 and RAID0+1 are superficially
 similar but quite different
 things.  The main differentiator is resilience to disk
 failure. RAID10 takes
 the raw disks in pairs, creates a mirror across each pair,
 and then stripes
 across all the sets of mirrors.  RAID0+1 divides the
 raw disks into two equal
 sets, constructs stripes across each set of disks, and then
 mirrors the
 two stripes.
 
 Read/Write performance is similar in either case: both
 perform well for the sort of small randomly distributed IO
 operations you'ld get when eg.
 running a RDBMS.  However, consider what happens if
 you get a disk failure.
 In the RAID10 case *one* of your N/2 mirrors is degraded
 but the other N-1
 drives in the array operate as normal.  In the RAID0+1
 case, one of the
 2 stripes is immediately out of action and the whole IO
 load is carried by
 the N/2 drives in the other stripe.
 
 Now consider what happens if a second drive should
 fail.  In the RAID10
 case, you're still up and running so long as the failed
 drive is one of
 the N-2 disks that aren't the mirror pair of the 1st failed
 drive.
 In the RAID0+1 case, you're out of action if the 2nd disk
 to fail is one
 of the N/2 drives from the working stripe.  Or in
 other words, if two
 random disks fail in a RAID10, chances are the RAID will
 still work.  If
 two arbitrarily selected disks fail in a RAID0+1 chances
 are basically
 even that the whole RAID is out of action[*].
 
 I don't think I've ever seen a manufacturer say RAID1+0
 instead of RAID10,
 but I suppose all things are possible.  My impression
 was that the 0+1 terminology was specifically invented to
 make it more visually distinctive
 -- ie to prevent confusion between '01' and '10'.
 
     Cheers,
 
     Matthew
 
 [*] Astute students of probability will point out that this
 really only
 makes a difference for N > 4, and for N=4 chances are
 even either way that failure of two drives would take out
 the RAID.
 
 --
 Dr Matthew J Seaman MA, D.Phil.               7 Priory Courtyard
                                               Flat 3
 PGP: http://www.infracaninophile.co.uk/pgpkey Ramsgate
                                               Kent, CT11 9PW
 


Re: ZFS or UFS for 4TB hardware RAID6?

2009-07-13 Thread Richard Mahlerwein

--- On Sun, 7/12/09, Maxim Khitrov mkhit...@gmail.com wrote:

 From: Maxim Khitrov mkhit...@gmail.com
 Subject: ZFS or UFS for 4TB hardware RAID6?
 To: Free BSD Questions list freebsd-questions@freebsd.org
 Date: Sunday, July 12, 2009, 11:47 PM
 Hello all,
 
 I'm about to build a new file server using 3ware 9690SA-8E
 controller
 and 4x Western Digital RE4-GP 2TB drives in RAID6. It is
 likely to
 grow in the future up to 10TB. I may use FreeBSD 8 on this
 one, since
 the release will likely be made by the time this server
 goes into
 production. The question is a simple one - I have no
 experience with
 ZFS and so wanted to ask for recommendations of that versus
 UFS2. How
 stable is the implementation and does it offer any benefits
 in my
 setup (described below)?
 
 All of the RAID6 space will only be used for file storage,
 accessible
 by network using NFS and SMB. It may be split into
 separate
 partitions, but most likely the entire array will be one
 giant storage
 area that is expanded every time another hard drive is
 added. The OS
 and all installed apps will be on a separate software RAID1
 array.
 
 Given that security is more important than performance,
 what would be
 your recommended setup and why?
 
 - Max

Your mileage may vary, but...

I would investigate either using more spindles if you want to stick to RAID6, 
or perhaps using another RAID level if you will be with 4 drives for a while.  
The reasoning is that there's an overhead with RAID6: parity consumes the 
equivalent of 2 disks, so in a 4-drive array you effectively have 2 drives' 
worth of data and 2 drives' worth of parity.  

With 4 drives, you could get much, much higher performance out of RAID10 (which 
is alternatively called RAID0+1 or RAID1+0 depending on the manufacturer and on 
how accurate they wish to be, and on how they actually implemented it, too). 
This would also mean 2 usable drives, as well, so you'd have the same space 
available in RAID10 as your proposed RAID6.  

I would confirm you can, on the fly, convert from RAID10 to RAID6 after you add 
more drives.  If you can not, then by all means stick with RAID6 now!

With 4 1 TB drives (for simpler examples)
RAID5 = 3 TB available, 1 TB worth used in parity.  Fast reads, slow writes. 
RAID6 = 2 TB available, 2 TB worth used in parity.  Moderately fast reads, 
slow writes.
RAID10 = 2 TB available, 2TB in duplicate copies (easier work than parity 
calculations).  Very fast reads, moderately fast writes.

When you switch to, say, 8 drives, the numbers start to change a bit.
RAID5 = 7TB available, 1 lost.
RAID6 = 6TB available, 2 lost.
RAID10 = 4TB available, 4 lost.



  


Re: ZFS or UFS for 4TB hardware RAID6?

2009-07-13 Thread Richard Mahlerwein

--- On Mon, 7/13/09, Richard Mahlerwein mahle...@yahoo.com wrote:

 From: Richard Mahlerwein mahle...@yahoo.com
 Subject: Re: ZFS or UFS for 4TB hardware RAID6?
 To: Free BSD Questions list freebsd-questions@freebsd.org
 Date: Monday, July 13, 2009, 1:29 PM
 --- On Sun, 7/12/09, Maxim Khitrov
 mkhit...@gmail.com
 wrote:
 
  From: Maxim Khitrov mkhit...@gmail.com
  Subject: ZFS or UFS for 4TB hardware RAID6?
  To: Free BSD Questions list freebsd-questions@freebsd.org
  Date: Sunday, July 12, 2009, 11:47 PM
  Hello all,
  
  I'm about to build a new file server using 3ware
 9690SA-8E
  controller
  and 4x Western Digital RE4-GP 2TB drives in RAID6. It
 is
  likely to
  grow in the future up to 10TB. I may use FreeBSD 8 on
 this
  one, since
  the release will likely be made by the time this
 server
  goes into
  production. The question is a simple one - I have no
  experience with
  ZFS and so wanted to ask for recommendations of that
 versus
  UFS2. How
  stable is the implementation and does it offer any
 benefits
  in my
  setup (described below)?
  
  All of the RAID6 space will only be used for file
 storage,
  accessible
  by network using NFS and SMB. It may be split into
  separate
  partitions, but most likely the entire array will be
 one
  giant storage
  area that is expanded every time another hard drive
 is
  added. The OS
  and all installed apps will be on a separate software
 RAID1
  array.
  
  Given that security is more important than
 performance,
  what would be
  your recommended setup and why?
  
  - Max
 
 Your mileage may vary, but...
 
 I would investigate either using more spindles if you want
 to stick to RAID6, or perhaps using another RAID level if
 you will be with 4 drives for a while.  The reasoning
 is that there's an overhead with RAID 6 - parity blocks are
 written to 2 disks, so in a 4 drive combination you have 2
 drives with data and 2 with parity.  
 
 With 4 drives, you could get much, much higher performance
 out of RAID10 (which is alternatively called RAID0+1 or
 RAID1+0 depending on the manufacturer and on how accurate
 they wish to be, and on how they actually implemented it,
 too). This would also mean 2 usable drives, as well, so
 you'd have the same space available in RAID10 as your
 proposed RAID6.  
 
 I would confirm you can, on the fly, convert from RAID10 to
 RAID6 after you add more drives.  If you can not, then
 by all means stick with RAID6 now!
 
 With 4 1 TB drives (for simpler examples)
 RAID5 = 3 TB available, 1 TB worth used in parity. 
 Fast reads, slow writes. 
 RAID6 = 2 TB available, 2 TB worth used in parity. 
 Moderately fast reads, slow writes.
 RAID10 = 2 TB available, 2TB in duplicate copies (easier
 work than parity calculations).  Very fast reads,
 moderately fast writes.
 
 When you switch to, say, 8 drives, the numbers start to
 change a bit.
 RAID5 = 7TB available, 1 lost.
 RAID6 = 6TB available, 2 lost.
 RAID10 = 4TB available, 4 lost.
 

Sorry, consider myself chastised for having missed the "security is more 
important than performance" bit. I tend toward solutions that show the most 
value, and with 4 drives it seems that I'd keep the same data security and 
just pick up the free speed of RAID10.  Change when you get to 6 or more 
drives, if necessary.

For data security, I can't answer for the UFS2 vs. ZFS.  For hardware setup, 
let me amend everything I said above with the following:

Since you are seriously focusing on data integrity, ignore everything I said 
but make sure you have good backups!  :)

Sorry, 
-Rich





Re: ZFS or UFS for 4TB hardware RAID6?

2009-07-13 Thread Maxim Khitrov
On Mon, Jul 13, 2009 at 1:46 PM, Richard Mahlerwein mahle...@yahoo.com wrote:

 Your mileage may vary, but...

 I would investigate either using more spindles if you want
 to stick to RAID6, or perhaps using another RAID level if
 you will be with 4 drives for a while.  The reasoning
 is that there's an overhead with RAID 6 - parity blocks are
 written to 2 disks, so in a 4 drive combination you have 2
 drives with data and 2 with parity.

 With 4 drives, you could get much, much higher performance
 out of RAID10 (which is alternatively called RAID0+1 or
 RAID1+0 depending on the manufacturer and on how accurate
 they wish to be, and on how they actually implemented it,
 too). This would also mean 2 usable drives, as well, so
 you'd have the same space available in RAID10 as your
 proposed RAID6.

 I would confirm you can, on the fly, convert from RAID10 to
 RAID6 after you add more drives.  If you can not, then
 by all means stick with RAID6 now!

 With 4 1 TB drives (for simpler examples)
 RAID5 = 3 TB available, 1 TB worth used in parity.
 Fast reads, slow writes.
 RAID6 = 2 TB available, 2 TB worth used in parity.
 Moderately fast reads, slow writes.
 RAID10 = 2 TB available, 2TB in duplicate copies (easier
 work than parity calculations).  Very fast reads,
 moderately fast writes.

 When you switch to, say, 8 drives, the numbers start to
 change a bit.
 RAID5 = 7TB available, 1 lost.
 RAID6 = 6TB available, 2 lost.
 RAID10 = 4TB available, 4 lost.


 Sorry, consider myself chastised for having missed the Security is more 
 important than performance bit. I tend toward solutions that show the most 
 value, and with 4 drives, it seems that I'd stick with the same data 
 security only pick up the free speed of RAID10.  Change when you get to 6 or 
 more drives, if necessary.

 For data security, I can't answer for the UFS2 vs. ZFS.  For hardware setup, 
 let me amend everything I said above with the following:

 Since you are seriously focusing on data integrity, ignore everything I said 
 but make sure you have good backups!  :)

 Sorry,
 -Rich

No problem :) I've been doing some reading since I posted this
question and it turns out that the controller will actually not allow
me to create a RAID6 array using only 4 drives. 3ware followed the
same reasoning as you; with 4 drives use RAID10.

I know that you can migrate from one to the other when a 5th disk is
added, but RAID10 can only handle 2 failed drives if they are from
separate RAID1 groups. In this way, it is just slightly less resilient
to failure than RAID6. With this new information, I think I may as
well get one more 2TB drive and start with 6TB of RAID6 space. This
will be less of a headache later on.

- Max


Re: ZFS or UFS for 4TB hardware RAID6?

2009-07-13 Thread Richard Mahlerwein

--- On Mon, 7/13/09, Maxim Khitrov mkhit...@gmail.com wrote:

 From: Maxim Khitrov mkhit...@gmail.com
 Subject: Re: ZFS or UFS for 4TB hardware RAID6?
 To: mahle...@yahoo.com
 Cc: Free BSD Questions list freebsd-questions@freebsd.org
 Date: Monday, July 13, 2009, 2:02 PM
 On Mon, Jul 13, 2009 at 1:46 PM,
 Richard Mahlerweinmahle...@yahoo.com
 wrote:
 
  Your mileage may vary, but...
 
  I would investigate either using more spindles if
 you want
  to stick to RAID6, or perhaps using another RAID
 level if
  you will be with 4 drives for a while.  The
 reasoning
  is that there's an overhead with RAID 6 - parity
 blocks are
  written to 2 disks, so in a 4 drive combination
 you have 2
  drives with data and 2 with parity.
 
  With 4 drives, you could get much, much higher
 performance
  out of RAID10 (which is alternatively called
 RAID0+1 or
  RAID1+0 depending on the manufacturer and on how
 accurate
  they wish to be, and on how they actually
 implemented it,
  too). This would also mean 2 usable drives, as
 well, so
  you'd have the same space available in RAID10 as
 your
  proposed RAID6.
 
  I would confirm you can, on the fly, convert from
 RAID10 to
  RAID6 after you add more drives.  If you can not,
 then
  by all means stick with RAID6 now!
 
  With 4 1 TB drives (for simpler examples)
  RAID5 = 3 TB available, 1 TB worth used in
 parity.
  Fast reads, slow writes.
  RAID6 = 2 TB available, 2 TB worth used in
 parity.
  Moderately fast reads, slow writes.
  RAID10 = 2 TB available, 2TB in duplicate copies
 (easier
  work than parity calculations).  Very fast
 reads,
  moderately fast writes.
 
  When you switch to, say, 8 drives, the numbers
 start to
  change a bit.
  RAID5 = 7TB available, 1 lost.
  RAID6 = 6TB available, 2 lost.
  RAID10 = 4TB available, 4 lost.
 
 
  Sorry, consider myself chastised for having missed the
 Security is more important than performance bit. I tend
 toward solutions that show the most value, and with 4
 drives, it seems that I'd stick with the same data
 security only pick up the free speed of RAID10.  Change
 when you get to 6 or more drives, if necessary.
 
  For data security, I can't answer for the UFS2 vs.
 ZFS.  For hardware setup, let me amend everything I said
 above with the following:
 
  Since you are seriously focusing on data integrity,
 ignore everything I said but make sure you have good
 backups!  :)
 
  Sorry,
  -Rich
 
 No problem :) I've been doing some reading since I posted
 this
 question and it turns out that the controller will actually
 not allow
 me to create a RAID6 array using only 4 drives. 3ware
 followed the
 same reasoning as you; with 4 drives use RAID10.
 
 I know that you can migrate from one to the other when a
 5th disk is
 added, but RAID10 can only handle 2 failed drives if they
 are from
 separate RAID1 groups. In this way, it is just slightly
 less resilient
 to failure than RAID6. With this new information, I think I
 may as
 well get one more 2TB drive and start with 6TB of RAID6
 space. This
 will be less of a headache later on.
 
 - Max

Just as a question: how ARE you planning on backing this beast up?  While I 
don't want to sound like a worry-wart, I have had odd things happen at the 
worst of times.  RAID cards fail, power supplies let out the magic smoke, users 
delete items they really want back... *sigh*

A bit of reading shows that ZFS, if it's stable enough, has some really great 
features that would be nice on such a large pile o' drives.  

See http://wiki.freebsd.org/ZFSQuickStartGuide

I guess the last question I'll ask (as any more may uncover my ignorance) is 
whether you need to use hardware RAID at all.  It seems both UFS2 and ZFS can 
do software RAID, which is quite reasonable with respect to performance and in 
many ways more robust, since it is a bit more portable (no specialized 
hardware).
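
For what it's worth, a minimal sketch of each, with mirroring as the simplest
case (device and pool names are placeholders): with UFS2 you would build the
mirror in GEOM, e.g.

    gmirror load
    gmirror label -v -b round-robin gm0 /dev/da0 /dev/da1
    newfs -U /dev/mirror/gm0

while the ZFS equivalent is a one-liner:

    zpool create tank mirror da0 da1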

There are others who may respond with better information on that front.  I've 
been a strong proponent of hardware RAID, but have recently begun to realize 
many of the reasons for that are only of limited validity now.

-Rich





Re: ZFS or UFS for 4TB hardware RAID6?

2009-07-13 Thread Maxim Khitrov
On Mon, Jul 13, 2009 at 2:13 PM, Richard Mahlerwein mahle...@yahoo.com wrote:

 --- On Mon, 7/13/09, Maxim Khitrov mkhit...@gmail.com wrote:

 From: Maxim Khitrov mkhit...@gmail.com
 Subject: Re: ZFS or UFS for 4TB hardware RAID6?
 To: mahle...@yahoo.com
 Cc: Free BSD Questions list freebsd-questions@freebsd.org
 Date: Monday, July 13, 2009, 2:02 PM
 On Mon, Jul 13, 2009 at 1:46 PM,
 Richard Mahlerweinmahle...@yahoo.com
 wrote:
 
  Your mileage may vary, but...
 
  I would investigate either using more spindles if
 you want
  to stick to RAID6, or perhaps using another RAID
 level if
  you will be with 4 drives for a while.  The
 reasoning
  is that there's an overhead with RAID 6 - parity
 blocks are
  written to 2 disks, so in a 4 drive combination
 you have 2
  drives with data and 2 with parity.
 
  With 4 drives, you could get much, much higher
 performance
  out of RAID10 (which is alternatively called
 RAID0+1 or
  RAID1+0 depending on the manufacturer and on how
 accurate
  they wish to be, and on how they actually
 implemented it,
  too). This would also mean 2 usable drives, as
 well, so
  you'd have the same space available in RAID10 as
 your
  proposed RAID6.
 
  I would confirm you can, on the fly, convert from
 RAID10 to
  RAID6 after you add more drives.  If you can not,
 then
  by all means stick with RAID6 now!
 
  With 4 1 TB drives (for simpler examples)
  RAID5 = 3 TB available, 1 TB worth used in
 parity.
  Fast reads, slow writes.
  RAID6 = 2 TB available, 2 TB worth used in
 parity.
  Moderately fast reads, slow writes.
  RAID10 = 2 TB available, 2TB in duplicate copies
 (easier
  work than parity calculations).  Very fast
 reads,
  moderately fast writes.
 
  When you switch to, say, 8 drives, the numbers
 start to
  change a bit.
  RAID5 = 7TB available, 1 lost.
  RAID6 = 6TB available, 2 lost.
  RAID10 = 4TB available, 4 lost.
 
 
  Sorry, consider myself chastised for having missed the
 Security is more important than performance bit. I tend
 toward solutions that show the most value, and with 4
 drives, it seems that I'd stick with the same data
 security only pick up the free speed of RAID10.  Change
 when you get to 6 or more drives, if necessary.
 
  For data security, I can't answer for the UFS2 vs.
 ZFS.  For hardware setup, let me amend everything I said
 above with the following:
 
  Since you are seriously focusing on data integrity,
 ignore everything I said but make sure you have good
 backups!  :)
 
  Sorry,
  -Rich

 No problem :) I've been doing some reading since I posted
 this
 question and it turns out that the controller will actually
 not allow
 me to create a RAID6 array using only 4 drives. 3ware
 followed the
 same reasoning as you; with 4 drives use RAID10.

 I know that you can migrate from one to the other when a
 5th disk is
 added, but RAID10 can only handle 2 failed drives if they
 are from
 separate RAID1 groups. In this way, it is just slightly
 less resilient
 to failure than RAID6. With this new information, I think I
 may as
 well get one more 2TB drive and start with 6TB of RAID6
 space. This
 will be less of a headache later on.

 - Max

 Just as a question: how ARE you planning on backing this beast up?  While I 
 don't want to sound like a worry-wort, I have had odd things happen at the 
 worst of times.  RAID cards fail, power supplies let out the magic smoke, 
 users delete items they really want back... *sigh*

Rsync over ssh to another server. Most of the data stored will never
change after the first upload. A daily rsync run will transfer one or
two gigs at the most. History is not required for the same reason;
this is an append-only storage for the most part. A backup for the
previous day is all that is required, but I will keep a weekly backup
as well until I start running out of space.
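
A sketch of what the nightly job might look like (host name, rsync path and
directories are invented for the example):

    # root's crontab on the file server: push yesterday's changes at 03:00
    0 3 * * * /usr/local/bin/rsync -az -e ssh /storage/ backuphost:/backup/storage/

Something similar on a weekly schedule, pointed at a second target directory,
would cover the weekly copy.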

 A bit of reading shows that ZFS, if it's stable enough, has some really great 
 features that would be nice on such a large pile o' drives.

 See http://wiki.freebsd.org/ZFSQuickStartGuide

 I guess the last question I'll ask (as any more may uncover my ignorance) is 
 if you need to use hardware RAID at all?  It seems both UFS2 and ZFS can do 
 software RAID which seems to be quite reasonable with respect to performance 
 and in many ways seems to be more robust since it is a bit more portable (no 
 specialized hardware).

I've thought about this one a lot. In my case, the hard drives are in
a separate enclosure from the server and the two had to be connected
via SAS cables. The 9690SA-8E card was the best choice I could find
for accessing an external SAS enclosure with support for 8 drives.

I could configure it in JBOD mode and then use software to create a
RAID array. In fact, I will likely do this to compare performance of a
hardware vs. software RAID5 solution. The ZFS RAID-Z option does not
appeal to me, because the read performance does not benefit from
additional drives, and I don't think RAID6 is available in software.
For those reasons I'm leaning toward a hardware implementation.

Re: ZFS or UFS for 4TB hardware RAID6?

2009-07-13 Thread Tom Worster
On 7/13/09 3:23 PM, Maxim Khitrov mkhit...@gmail.com wrote:

 On Mon, Jul 13, 2009 at 2:13 PM, Richard Mahlerwein mahle...@yahoo.com wrote:

 I guess the last question I'll ask (as any more may uncover my ignorance) is
 if you need to use hardware RAID at all?  It seems both UFS2 and ZFS can do
 software RAID which seems to be quite reasonable with respect to performance
 and in many ways seems to be more robust since it is a bit more portable (no
 specialized hardware).
 
 I've thought about this one a lot. In my case, the hard drives are in
 a separate enclosure from the server and the two had to be connected
 via SAS cables. The 9690SA-8E card was the best choice I could find
 for accessing an external SAS enclosure with support for 8 drives.
 
 I could configure it in JBOD mode and then use software to create a
 RAID array. In fact, I will likely do this to compare performance of a
 hardware vs. software RAID5 solution.

if you do, please share any insights that come of it here.




Re: ZFS or UFS for 4TB hardware RAID6?

2009-07-13 Thread Richard Mahlerwein

--- On Mon, 7/13/09, Maxim Khitrov mkhit...@gmail.com wrote:

 From: Maxim Khitrov mkhit...@gmail.com
 Subject: Re: ZFS or UFS for 4TB hardware RAID6?
 To: mahle...@yahoo.com
 Cc: Free BSD Questions list freebsd-questions@freebsd.org
 Date: Monday, July 13, 2009, 3:23 PM
 On Mon, Jul 13, 2009 at 2:13 PM,
 Richard Mahlerweinmahle...@yahoo.com
 wrote:
 
  --- On Mon, 7/13/09, Maxim Khitrov mkhit...@gmail.com
 wrote:
 
  From: Maxim Khitrov mkhit...@gmail.com
  Subject: Re: ZFS or UFS for 4TB hardware RAID6?
  To: mahle...@yahoo.com
  Cc: Free BSD Questions list freebsd-questions@freebsd.org
  Date: Monday, July 13, 2009, 2:02 PM
  On Mon, Jul 13, 2009 at 1:46 PM,
  Richard Mahlerweinmahle...@yahoo.com
  wrote:
  
   Your mileage may vary, but...
  
   I would investigate either using more
 spindles if
  you want
   to stick to RAID6, or perhaps using
 another RAID
  level if
   you will be with 4 drives for a while. 
 The
  reasoning
   is that there's an overhead with RAID 6 -
 parity
  blocks are
   written to 2 disks, so in a 4 drive
 combination
  you have 2
   drives with data and 2 with parity.
  
   With 4 drives, you could get much, much
 higher
  performance
   out of RAID10 (which is alternatively
 called
  RAID0+1 or
   RAID1+0 depending on the manufacturer and
 on how
  accurate
   they wish to be, and on how they
 actually
  implemented it,
   too). This would also mean 2 usable
 drives, as
  well, so
   you'd have the same space available in
 RAID10 as
  your
   proposed RAID6.
  
   I would confirm you can, on the fly,
 convert from
  RAID10 to
   RAID6 after you add more drives.  If you
 can not,
  then
   by all means stick with RAID6 now!
  
   With 4 1 TB drives (for simpler
 examples)
   RAID5 = 3 TB available, 1 TB worth used
 in
  parity.
   Fast reads, slow writes.
   RAID6 = 2 TB available, 2 TB worth used
 in
  parity.
   Moderately fast reads, slow writes.
   RAID10 = 2 TB available, 2TB in duplicate
 copies
  (easier
   work than parity calculations).  Very
 fast
  reads,
   moderately fast writes.
  
   When you switch to, say, 8 drives, the
 numbers
  start to
   change a bit.
   RAID5 = 7TB available, 1 lost.
   RAID6 = 6TB available, 2 lost.
   RAID10 = 4TB available, 4 lost.
  
  
   Sorry, consider myself chastised for having
 missed the
  Security is more important than performance bit.
 I tend
  toward solutions that show the most value, and
 with 4
  drives, it seems that I'd stick with the same
 data
  security only pick up the free speed of RAID10.
  Change
  when you get to 6 or more drives, if necessary.
  
   For data security, I can't answer for the
 UFS2 vs.
  ZFS.  For hardware setup, let me amend everything
 I said
  above with the following:
  
   Since you are seriously focusing on data
 integrity,
  ignore everything I said but make sure you have
 good
  backups!  :)
  
   Sorry,
   -Rich
 
  No problem :) I've been doing some reading since I
 posted
  this
  question and it turns out that the controller will
 actually
  not allow
  me to create a RAID6 array using only 4 drives.
 3ware
  followed the
  same reasoning as you; with 4 drives use RAID10.
 
  I know that you can migrate from one to the other
 when a
  5th disk is
  added, but RAID10 can only handle 2 failed drives
 if they
  are from
  separate RAID1 groups. In this way, it is just
 slightly
  less resilient
  to failure than RAID6. With this new information,
 I think I
  may as
  well get one more 2TB drive and start with 6TB of
 RAID6
  space. This
  will be less of a headache later on.
 
  - Max
 
  Just as a question: how ARE you planning on backing
 this beast up?  While I don't want to sound like a
 worry-wort, I have had odd things happen at the worst of
 times.  RAID cards fail, power supplies let out the magic
 smoke, users delete items they really want back... *sigh*
 
 Rsync over ssh to another server. Most of the data stored
 will never
 change after the first upload. A daily rsync run will
 transfer one or
 two gigs at the most. History is not required for the same
 reason;
 this is an append-only storage for the most part. A backup
 for the
 previous day is all that is required, but I will keep a
 weekly backup
 as well until I start running out of space.
 
  A bit of reading shows that ZFS, if it's stable
 enough, has some really great features that would be nice on
 such a large pile o' drives.
 
  See http://wiki.freebsd.org/ZFSQuickStartGuide
 
  I guess the last question I'll ask (as any more may
 uncover my ignorance) is if you need to use hardware RAID at
 all?  It seems both UFS2 and ZFS can do software RAID
 which seems to be quite reasonable with respect to
 performance and in many ways seems to be more robust since
 it is a bit more portable (no specialized hardware).
 
 I've thought about this one a lot. In my case, the hard
 drives are in
 a separate enclosure from the server and the two had to be
 connected
 via SAS cables. The 9690SA-8E card was the best choice I

Re: ZFS or UFS for 4TB hardware RAID6?

2009-07-13 Thread FBSD UG


A bit of reading shows that ZFS, if it's stable enough, has some  
really great features that would be nice on such a large pile o'  
drives.


See http://wiki.freebsd.org/ZFSQuickStartGuide

I guess the last question I'll ask (as any more may uncover my  
ignorance) is if you need to use hardware RAID at all?  It seems  
both UFS2 and ZFS can do software RAID which seems to be quite  
reasonable with respect to performance and in many ways seems to be  
more robust since it is a bit more portable (no specialized  
hardware).


I've thought about this one a lot. In my case, the hard drives are in
a separate enclosure from the server and the two had to be connected
via SAS cables. The 9690SA-8E card was the best choice I could find
for accessing an external SAS enclosure with support for 8 drives.

I could configure it in JBOD mode and then use software to create a
RAID array. In fact, I will likely do this to compare performance of a
hardware vs. software RAID5 solution. The ZFS RAID-Z option does not
appeal to me, because the read performance does not benefit from
additional drives, and I don't think RAID6 is available in software.
For those reasons I'm leaning toward a hardware implementation.




Hi Maxim,

RAID-Z2 is the RAID6 double parity option in ZFS.
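
A minimal sketch, with made-up pool and device names:

    zpool create tank raidz2 da0 da1 da2 da3 da4 da5

which, like RAID6, survives any two disk failures in the vdev.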


gr
Arno