Re: [zfs-discuss] Rethinking my zpool

2010-03-20 Thread Brandon High
On Sat, Mar 20, 2010 at 1:35 PM, Richard Elling wrote: > For those disinclined to click, data retention when mirroring wins over raidz when looking at the problem from the perspective of number of drives available. Why? Because 5+1 raidz survives the loss of any disk, but 3 sets of 2-wa
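A quick way to check the mirrored-layout count, assuming six disks arranged as three 2-way mirrors (a back-of-the-envelope sketch, not from the original post):

    # six disks in three 2-way mirrors: how many 2-disk failures are survivable?
    echo "total 2-disk combos:      $(( 6 * 5 / 2 ))"      # 15
    echo "fatal combos (both disks in one mirror): 3"
    echo "survivable 2-disk combos: $(( 6 * 5 / 2 - 3 ))"  # 12 of 15
    # a 5+1 raidz1 survives any single failure, but no 2-disk failure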

Re: [zfs-discuss] Intel SASUC8I - worth every penny

2010-03-20 Thread Erik Trimble
Nah, the 8x2.5"-in-2 are $220, while the 5x3.5"-in-3 are $120. You can get 4x3.5"-in-3 for $100, 3x3.5"-in-2 for $80, and even 4x2.5"-in-1 for $65. ( http://www.addonics.com/products/raid_system/ae4rcs25nsa.asp ) The Cooler Master thing you linked to isn't a Hot Swap module. It does 4-in-3, b

Re: [zfs-discuss] Intel SASUC8I - worth every penny

2010-03-20 Thread Ethan
Whoops, Erik's links show I was wrong about my first point. Though those 5-in-3s are five times as expensive as the 4-in-3. On Sat, Mar 20, 2010 at 22:46, Ethan wrote: > I don't think you can fit five 3.5" drives in 3 x 5.25", but I have a > number of coolermaster 4-in-3 modules, I recommend the

Re: [zfs-discuss] Intel SASUC8I - worth every penny

2010-03-20 Thread Ethan
I don't think you can fit five 3.5" drives in 3 x 5.25", but I have a number of coolermaster 4-in-3 modules, I recommend them: http://www.amazon.com/-/dp/B00129CDGC/ On Sat, Mar 20, 2010 at 20:23, Geoff wrote: > Thanks for your review! My SiI3114 isn't recognizing drives in Opensolaris > so I'v

Re: [zfs-discuss] Intel SASUC8I - worth every penny

2010-03-20 Thread Erik Trimble
Geoff wrote: Thanks for your review! My SiI3114 isn't recognizing drives in Opensolaris so I've been looking for a replacement. This card seems perfect so I ordered one last night. Can anyone recommend a cheap 3 x 5.25 ---> 5 3.5 enclosure I could use with this card? The extra ports necess

Re: [zfs-discuss] sympathetic (or just multiple) drive failures

2010-03-20 Thread Bill Sommerfeld
On 03/19/10 19:07, zfs ml wrote: What are peoples' experiences with multiple drive failures? 1985-1986. DEC RA81 disks. Bad glue that degraded at the disk's operating temperature. Head crashes. No more need be said. - Bill

Re: [zfs-discuss] Proposition of a new zpool property.

2010-03-20 Thread Robert Milkowski
To add my 0.2 cents... I think starting/stopping scrubs belongs to cron, SMF, etc. and not to zfs itself. However, what would be nice to have is the ability to freeze/resume a scrub and also limit its rate of scrubbing. One of the reasons is that when working in SAN environments one has to tak
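A minimal cron entry along the lines Robert suggests, assuming a pool named 'tank' and a weekly window (the schedule and pool name are placeholders):

    # root's crontab: kick off a scrub of 'tank' every Sunday at 03:00
    0 3 * * 0 /usr/sbin/zpool scrub tank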

Re: [zfs-discuss] Intel SASUC8I - worth every penny

2010-03-20 Thread Geoff
Thanks for your review! My SiI3114 isn't recognizing drives in Opensolaris so I've been looking for a replacement. This card seems perfect so I ordered one last night. Can anyone recommend a cheap 3 x 5.25 ---> 5 3.5 enclosure I could use with this card? The extra ports necessitate more driv

Re: [zfs-discuss] Proposition of a new zpool property.

2010-03-20 Thread Tim Cook
On Sat, Mar 20, 2010 at 5:36 PM, Bob Friesenhahn < bfrie...@simple.dallas.tx.us> wrote: > On Sat, 20 Mar 2010, Tim Cook wrote: >> Funny (ironic?) you'd quote the UNIX philosophy when the Linux folks have been running around since day one claiming the basic concept of ZFS flies in the fa

Re: [zfs-discuss] sympathetic (or just multiple) drive failures

2010-03-20 Thread Svein Skogen
On 21.03.2010 00:14, Erik Trimble wrote: Richard Elling wrote: I see this on occasion. However, the cause is rarely attributed to a bad batch of drives. More common is power supplies, HBA firmware, cables, Pepsi syndrome, or similar. -- richard Mmmm. Pepsi Syndrome. I take it this is similar to

Re: [zfs-discuss] sympathetic (or just multiple) drive failures

2010-03-20 Thread Erik Trimble
Richard Elling wrote: I see this on occasion. However, the cause is rarely attributed to a bad batch of drives. More common is power supplies, HBA firmware, cables, Pepsi syndrome, or similar. -- richard Mmmm. Pepsi Syndrome. I take it this is similar to the Coke addiction many of my keyboa

Re: [zfs-discuss] Q : recommendations for zpool configuration

2010-03-20 Thread Bob Friesenhahn
On Sat, 20 Mar 2010, Eric Andersen wrote: 2. Taking into account the above, it's a great deal easier on the pocket book to expand two drives at a time instead of four at a time. As bigger drives are always getting cheaper, I feel that I have a lot more flexibility with mirrors when it comes
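Expanding a pool of mirrors two drives at a time looks roughly like this; the pool and device names below are placeholders, not taken from the thread:

    # add one more 2-way mirror vdev to an existing pool of mirrors
    zpool add tank mirror c2t4d0 c2t5d0
    # or grow an existing mirror in place by replacing its disks with larger ones, one at a time
    zpool replace tank c2t0d0 c3t0d0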

Re: [zfs-discuss] Proposition of a new zpool property.

2010-03-20 Thread Bob Friesenhahn
On Sat, 20 Mar 2010, Tim Cook wrote: Funny (ironic?) you'd quote the UNIX philosophy when the Linux folks have been running around since day one claiming the basic concept of ZFS flies in the face of that very concept. Rather than do one thing well, it's unifying two things (file system and r

Re: [zfs-discuss] Proposition of a new zpool property.

2010-03-20 Thread Svein Skogen
On 20.03.2010 23:00, Gary Gendel wrote: I'm not sure I like this at all. Some of my pools take hours to scrub. I have a cron job run scrubs in sequence... Start one pool's scrub and then poll until it's finished, start the next and wait, and so on so I don't create too much load and bring a

Re: [zfs-discuss] Q : recommendations for zpool configuration

2010-03-20 Thread Eric Andersen
I went through this determination when setting up my pool. I decided to go with mirrors instead of raidz2 after considering the following: 1. Drive capacity in my box. At most, I can realistically cram 10 drives in my box and I am not interested in expanding outside of the box. I could go w
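For reference, ten drives laid out as five 2-way mirrors would be created roughly like this (pool and device names are placeholders):

    zpool create tank \
      mirror c1t0d0 c1t1d0 \
      mirror c1t2d0 c1t3d0 \
      mirror c1t4d0 c1t5d0 \
      mirror c1t6d0 c1t7d0 \
      mirror c2t0d0 c2t1d0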

Re: [zfs-discuss] Proposition of a new zpool property.

2010-03-20 Thread Tim Cook
On Sat, Mar 20, 2010 at 5:00 PM, Gary Gendel wrote: > I'm not sure I like this at all. Some of my pools take hours to scrub. I have a cron job run scrubs in sequence... Start one pool's scrub and then poll until it's finished, start the next and wait, and so on so I don't create too much

Re: [zfs-discuss] Proposition of a new zpool property.

2010-03-20 Thread Tim Cook
On Sat, Mar 20, 2010 at 4:00 PM, Richard Elling wrote: > On Mar 20, 2010, at 12:07 PM, Svein Skogen wrote: > > We all know that data corruption may happen, even on the most reliable of hardware. That's why zfs has pool scrubbing. > > Could we introduce a zpool option (as in zpool set ) fo

Re: [zfs-discuss] Proposition of a new zpool property.

2010-03-20 Thread Gary Gendel
I'm not sure I like this at all. Some of my pools take hours to scrub. I have a cron job run scrubs in sequence... Start one pool's scrub and then poll until it's finished, start the next and wait, and so on so I don't create too much load and bring all I/O to a crawl. The job is launched on
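A sketch of the sequential-scrub wrapper Gary describes, assuming two pools named 'tank' and 'backup' (pool names and polling interval are placeholders):

    #!/bin/sh
    # scrub pools one after another, waiting for each to finish before starting the next
    for pool in tank backup; do
        zpool scrub "$pool"
        # poll until 'zpool status' no longer reports a scrub in progress
        while zpool status "$pool" | grep -q "scrub in progress"; do
            sleep 300
        done
    done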

Re: [zfs-discuss] Proposition of a new zpool property.

2010-03-20 Thread Richard Elling
On Mar 20, 2010, at 12:07 PM, Svein Skogen wrote: > We all know that data corruption may happen, even on the most reliable of hardware. That's why zfs has pool scrubbing. > > Could we introduce a zpool option (as in zpool set ) for "scrub period", in "number of hours" (with 0 being no autom

Re: [zfs-discuss] sympathetic (or just multiple) drive failures

2010-03-20 Thread Bob Friesenhahn
On Fri, 19 Mar 2010, zfs ml wrote: same enclosure, same rack, etc for a given raid 5/6/z1/z2/z3 system, should we be paying more attention to harmonics, vibration/isolation and non-intuitive system level statistics that might be inducing close proximity drive failures rather than just throwing

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-20 Thread Richard Elling
On Mar 18, 2010, at 6:28 AM, Darren J Moffat wrote: > The only tool I'm aware of today that provides a copy of the data, and all of the ZPL metadata and all the ZFS dataset properties, is 'zfs send'. AFAIK, this is correct. Further, the only type of tool that can back up a pool is a tool like
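For reference, a replicated send stream that carries dataset properties and snapshots along with the data looks roughly like this (pool, snapshot, and target names are placeholders):

    # -R sends the whole dataset tree, including properties and snapshots
    zfs snapshot -r tank@backup1
    zfs send -R tank@backup1 | zfs receive -d -F backuppool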

Re: [zfs-discuss] Rethinking my zpool

2010-03-20 Thread Richard Elling
On Mar 19, 2010, at 5:32 AM, Chris Dunbar - Earthside, LLC wrote: > Hello, > > After being immersed in this list and other ZFS sites for the past few weeks I am having some doubts about the zpool layout on my new server. It's not too late to make a change so I thought I would ask for commen

Re: [zfs-discuss] Proposition of a new zpool property.

2010-03-20 Thread Svein Skogen
On 20.03.2010 20:53, Giovanni Tirloni wrote: On Sat, Mar 20, 2010 at 4:07 PM, Svein Skogen <sv...@stillbilde.net> wrote: We all know that data corruption may happen, even on the most reliable of hardware. That's why zfs has pool scrubbing. Could we introduce a zpool option (a

Re: [zfs-discuss] sympathetic (or just multiple) drive failures

2010-03-20 Thread Richard Elling
On Mar 19, 2010, at 7:07 PM, zfs ml wrote: > Most discussions I have seen about RAID 5/6 and why it stops "working" seem to base their conclusions solely on single drive characteristics and statistics. > It seems to me there is a missing component in the discussion of drive failures in the

Re: [zfs-discuss] Proposition of a new zpool property.

2010-03-20 Thread Giovanni Tirloni
On Sat, Mar 20, 2010 at 4:07 PM, Svein Skogen wrote: > We all know that data corruption may happen, even on the most reliable of hardware. That's why zfs has pool scrubbing. > > Could we introduce a zpool option (as in zpool set ) for "scrub period", in "number of hours" (with 0 being no aut

Re: [zfs-discuss] is this pool recoverable?

2010-03-20 Thread Patrick Tiquet
Thanks for the info. I'll try the live CD method when I have access to the system next week.

Re: [zfs-discuss] ISCSI + RAID-Z + OpenSolaris HA

2010-03-20 Thread David Magda
On Mar 20, 2010, at 14:37, Remco Lengers wrote: You seem to be concerned about the availability? Open HA seems to be a package last updated in 2005 (version 0.3.6). (?) It seems to me like a real fun toy project to build but I would be pretty reserved about the actual availability and putti

[zfs-discuss] Proposition of a new zpool property.

2010-03-20 Thread Svein Skogen
We all know that data corruption may happen, even on the most reliable of hardware. That's why zfs has pool scrubbing. Could we introduce a zpool option (as in zpool set ) for "scrub period", in "number of hours" (with 0 being no automatic scrubbing). I see several modern RAID controllers (s
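What the proposal would look like in practice, using hypothetical syntax (scrubperiod is not an existing zpool property; the 168-hour period and pool name are placeholders):

    # hypothetical, per the proposal: scrub every 168 hours; 0 would disable automatic scrubbing
    zpool set scrubperiod=168 tank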

Re: [zfs-discuss] is this pool recoverable?

2010-03-20 Thread Sriram Narayanan
On Sun, Mar 21, 2010 at 12:32 AM, Miles Nordin wrote: >> "sn" == Sriram Narayanan writes: > sn> http://docs.sun.com/app/docs/doc/817-2271/ghbxs?a=view > yeah, but he has no slog, and he says 'zpool clear' makes the system panic and reboot, so even from way over here that link looks u

Re: [zfs-discuss] is this pool recoverable?

2010-03-20 Thread Miles Nordin
> "sn" == Sriram Narayanan writes: sn> http://docs.sun.com/app/docs/doc/817-2271/ghbxs?a=view yeah, but he has no slog, and he says 'zpool clear' makes the system panic and reboot, so even from way over here that link looks useless. Patrick, maybe try a newer livecd from genunix.org lik

Re: [zfs-discuss] 3 disk RAID-Z2 pool

2010-03-20 Thread Svein Skogen
On 20.03.2010 17:39, Henk Langeveld wrote: On 2010-03-15 16:50, Khyron: Yeah, this threw me. A 3 disk RAID-Z2 doesn't make sense, because at a redundancy level, RAID-Z2 looks like RAID 6. That is, there are 2 levels of parity for the data. Out of 3 disks, the equivalent of 2 disks will be used t

Re: [zfs-discuss] ISCSI + RAID-Z + OpenSolaris HA

2010-03-20 Thread Remco Lengers
Vikkr, You seem to be concerned about the availability? Open HA seems to be a package last updated in 2005 (version 0.3.6). (?) It seems to me like a real fun toy project to build, but I would be pretty reserved about the actual availability, and about putting this kind of setup into production

Re: [zfs-discuss] Usage of hot spares and hardware allocation capabilities.

2010-03-20 Thread Bob Friesenhahn
On Sat, 20 Mar 2010, Robin Axelsson wrote: My idea is rather that the "hot spares" (or perhaps we should say "cold spares" then) are off all the time until they are needed or when a user initiated/scheduled system integrity check is being conducted. They could go up for a "test spin" at each o

Re: [zfs-discuss] ISCSI + RAID-Z + OpenSolaris HA

2010-03-20 Thread Ross Walker
On Mar 20, 2010, at 11:48 AM, vikkr wrote: Thanks Ross, I plan on exporting each drive individually over iSCSI. In this case, writes, as well as reads, will go to all 6 discs at once, right? The only question - how to calculate the fault tolerance of such a system if the discs are all different

Re: [zfs-discuss] 3 disk RAID-Z2 pool

2010-03-20 Thread Henk Langeveld
On 2010-03-15 16:50, Khyron: Yeah, this threw me. A 3 disk RAID-Z2 doesn't make sense, because at a redundancy level, RAID-Z2 looks like RAID 6. That is, there are 2 levels of parity for the data. Out of 3 disks, the equivalent of 2 disks will be used to store redundancy (parity) data and only
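Concretely, with three disks both layouts below end up with roughly one disk's worth of usable space: raidz2 spends two disks on parity, while a 3-way mirror keeps three full copies (device names are placeholders):

    # either:
    zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0
    # or:
    zpool create tank mirror c1t0d0 c1t1d0 c1t2d0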

Re: [zfs-discuss] is this pool recoverable?

2010-03-20 Thread Sriram Narayanan
On Sat, Mar 20, 2010 at 9:19 PM, Patrick Tiquet wrote: > Also, I tried to run zpool clear, but the system crashes and reboots. Please see if this link helps http://docs.sun.com/app/docs/doc/817-2271/ghbxs?a=view -- Sriram - Belenix: www.belenix.org

Re: [zfs-discuss] is this pool recoverable?

2010-03-20 Thread Patrick Tiquet
Also, I tried to run zpool clear, but the system crashes and reboots.

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-20 Thread David Magda
On Mar 20, 2010, at 00:57, Edward Ned Harvey wrote: I used NDMP up till November, when we replaced our NetApp with a Solaris Sun box. In NDMP, to choose the source files, we had the ability to browse the fileserver, select files, and specify file matching patterns. My point is: NDMP is fi

Re: [zfs-discuss] ISCSI + RAID-Z + OpenSolaris HA

2010-03-20 Thread vikkr
Thanks Ross, I plan on exporting each drive individually over iSCSI. In this case, writes, as well as reads, will go to all 6 discs at once, right? The only question - how to calculate the fault tolerance of such a system if the discs are all different in size? Maybe there is such a tool, or a check?

[zfs-discuss] is this pool recoverable?

2010-03-20 Thread Patrick Tiquet
This system is running stock 111b on an Intel Atom D945GCLF2 motherboard. The pool consists of two mirrored 1 TB SATA disks. I noticed the system was locked up, rebooted, and the pool status shows as follows: pool: atomfs state: FAULTED status: An intent log record could not be read.

Re: [zfs-discuss] ISCSI + RAID-Z + OpenSolaris HA

2010-03-20 Thread Ross Walker
On Mar 20, 2010, at 10:18 AM, vikkr wrote: Hi, sorry for the bad English and the picture :). Would such a setup work? Three OpenFiler servers each give their drives (2 x 1 TB) over iSCSI to an OpenSolaris server. On OpenSolaris, a double-parity RAID-Z is assembled from them. The OpenSolaris server provides NFS access to this array, and du
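On the OpenSolaris side, attaching the six exported LUNs and building the double-parity pool would look roughly like this; target addresses and device names are placeholders, and this assumes the Solaris iSCSI initiator rather than any particular OpenFiler configuration:

    # point the initiator at the three OpenFiler targets
    iscsiadm add discovery-address 192.168.1.11
    iscsiadm add discovery-address 192.168.1.12
    iscsiadm add discovery-address 192.168.1.13
    iscsiadm modify discovery --sendtargets enable
    devfsadm -c iscsi
    # then build raidz2 across the six remote LUNs
    zpool create tank raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0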

Re: [zfs-discuss] Usage of hot spares and hardware allocation capabilities.

2010-03-20 Thread Tonmaus
> I know about those SoHo boxes and the whatnot, they keep spinning up and down all the time and the worst thing is that you cannot disable this sleep/powersave feature on most of these devices. That judgment is in the eye of the beholder. We have a couple of Thecus NAS boxes and some LVM R

Re: [zfs-discuss] Validating alignment of NTFS/VMDK/ZFS blocks

2010-03-20 Thread Chris Murray
That's a good idea, thanks. I get the feeling the remainder won't be zero, which will back up the misalignment theory. After a bit more digging, it seems the problem is just an NTFS issue and can be addressed irrespective of underlying storage system. I think I'm going to try the process in the
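The remainder check Chris mentions is just the partition's starting byte offset modulo the block size; for the classic pre-Vista NTFS default of a 63-sector start it comes out non-zero (this sketch assumes 512-byte sectors and a 4 KB boundary):

    # 63-sector start checked against a 4096-byte boundary
    echo $(( 63 * 512 % 4096 ))     # 3584 -> misaligned
    echo $(( 2048 * 512 % 4096 ))   # 0    -> aligned (2048 sectors = 1 MB offset)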

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-20 Thread Edward Ned Harvey
> 5+ years ago the variety of NDMP that was available with the combination of NetApp's OnTap and Veritas NetBackup did backups at the volume level. When I needed to go to tape to recover a file that was no longer in snapshots, we had to find space on a NetApp to restore the volume. It cou

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-20 Thread Edward Ned Harvey
> > I'll say it again: neither 'zfs send' nor (s)tar is an enterprise (or even home) backup system on their own; one or both can be components of the full solution. > Up to a point. zfs send | zfs receive does make a very good backup scheme for the home user with a moderate

Re: [zfs-discuss] Usage of hot spares and hardware allocation capabilities.

2010-03-20 Thread Robin Axelsson
I know about those SoHo boxes and the whatnot, they keep spinning up and down all the time and the worst thing is that you cannot disable this sleep/powersave feature on most of these devices. I believe I have seen "sleep mode" support when I skimmed through the feature lists of the LSI contro

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-20 Thread Mike Gerdts
On Fri, Mar 19, 2010 at 11:57 PM, Edward Ned Harvey wrote: >> 1. NDMP for putting "zfs send" streams on tape over the network. So > Tell me if I missed something here. I don't think I did. I think this sounds like crazy talk. > I used NDMP up till November, when we replaced our NetApp wit

Re: [zfs-discuss] Thoughts on ZFS Pool Backup Strategies

2010-03-20 Thread Chris Gerhard
> I'll say it again: neither 'zfs send' nor (s)tar is an enterprise (or even home) backup system on their own; one or both can be components of the full solution. Up to a point. zfs send | zfs receive does make a very good backup scheme for the home user with a moderate amount of s
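A minimal incremental send/receive along the lines Chris describes, assuming a source pool 'tank', a backup pool 'backup', and named snapshots (all names are placeholders):

    # first full copy of the whole tree
    zfs snapshot -r tank@mon
    zfs send -R tank@mon | zfs receive -d -F backup
    # later runs send only the changes since the previous snapshot
    zfs snapshot -r tank@tue
    zfs send -R -i tank@mon tank@tue | zfs receive -d -F backup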

Re: [zfs-discuss] Usage of hot spares and hardware allocation capabilities.

2010-03-20 Thread Tonmaus
> So, is there a sleep/hibernation/standby mode that the hot spares operate in, or are they on all the time regardless of whether they are in use or not? This depends on the power-save options of your hardware, not on ZFS. Arguably, there is less wear on the heads for a hot spare. I guess th