Re: [zfs-discuss] zpool upgrade -v

2008-07-03 Thread Walter Faleiro
Hi, I reinstalled our Solaris 10 box using the latest update available. However, I could not upgrade the zpool:

bash-3.00# zpool upgrade -v
This system is currently running ZFS version 4.

The following versions are supported:

VER  DESCRIPTION
---
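
For anyone hitting the same wall, a minimal sketch of the usual sequence (the pool name "tank" is hypothetical): zpool upgrade -v only lists the versions the system supports, while actually upgrading requires naming a pool or passing -a.

  # List the ZFS versions this system supports
  zpool upgrade -v
  # Show which pools are below the current version
  zpool upgrade
  # Upgrade a single pool, or every pool at once
  zpool upgrade tank
  zpool upgrade -a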

Re: [zfs-discuss] ZFS configuration for VMware

2008-07-03 Thread Ross
Regarding the error checking, as others suggested you're best off buying two devices and mirroring them. ZFS has great error checking, why not use it :D http://blogs.sun.com/perrin/entry/slog_blog_or_blogging_on And regarding the memory loss after the battery runs down, that's no different from any
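
A minimal sketch of the mirrored slog setup being suggested (pool and device names hypothetical):

  # Attach a mirrored pair of log devices to an existing pool
  zpool add tank log mirror c1t0d0 c1t1d0
  # Verify the mirrored log vdev appears
  zpool status tank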

Re: [zfs-discuss] zpool upgrade -v

2008-07-03 Thread Boyd Adamson
Walter Faleiro [EMAIL PROTECTED] writes: Hi, I reinstalled our Solaris 10 box using the latest update available. However, I could not upgrade the zpool: bash-3.00# zpool upgrade -v This system is currently running ZFS version 4.

Re: [zfs-discuss] J4200/J4400 Array

2008-07-03 Thread Mertol Ozyoney
Hi; You are right that the J series do not have NVRAM onboard. However, most JBODs, like HP's MSA series, have some NVRAM. The idea behind not using NVRAM on the JBODs is: -) There is no use adding limited RAM to a JBOD, as disks already have a lot of cache. -) It's easy to design a redundant JBOD

Re: [zfs-discuss] J4200/J4400 Array

2008-07-03 Thread Mertol Ozyoney
You should be able to buy them today. GA should be next week. Mertol Mertol Ozyoney Storage Practice - Sales Manager Sun Microsystems, TR Istanbul TR Phone +902123352200 Mobile +905339310752 Fax +90212335 Email [EMAIL PROTECTED] -Original Message- From: Tim [mailto:[EMAIL

Re: [zfs-discuss] J4200/J4400 Array

2008-07-03 Thread James C. McPherson
Mertol Ozyoney wrote: Hi; You are right that the J series do not have NVRAM onboard. However, most JBODs, like HP's MSA series, have some NVRAM. The idea behind not using NVRAM on the JBODs is: -) There is no use adding limited RAM to a JBOD, as disks already have a lot of cache. -) It's

[zfs-discuss] Large zpool design considerations

2008-07-03 Thread Don Enrique
Hi, I am looking for some best practice advice on a project that I am working on. We are looking at migrating ~40TB of backup data to ZFS, with an annual data growth of 20-25%. Now, my initial plan was to create one large pool comprised of X RAIDZ-2 vdevs (7+2) with one hot spare per 10
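
For concreteness, a sketch of one such layout (device names hypothetical): a single 7+2 RAIDZ-2 vdev plus a hot spare, with further vdevs appended the same way as the pool grows.

  # Create a pool from one 9-disk (7+2) RAIDZ-2 vdev with a hot spare
  zpool create backup raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
      c1t5d0 c1t6d0 c1t7d0 c1t8d0 spare c2t0d0
  # Expand later by adding another 7+2 vdev
  zpool add backup raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 \
      c3t5d0 c3t6d0 c3t7d0 c3t8d0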

Re: [zfs-discuss] Large zpool design considerations

2008-07-03 Thread Darren J Moffat
Don Enrique wrote: Now, my initial plan was to create one large pool comprised of X RAIDZ-2 vdevs (7+2) with one hot spare per 10 drives and just continue to expand that pool as needed. Between calculating the MTTDL and performance models I was hit by a rather scary thought. A
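
For readers following the MTTDL thread, the usual first-order approximation for a double-parity group of N disks, assuming independent failures with per-disk MTBF and repair time MTTR, is:

  MTTDL \approx \frac{\mathrm{MTBF}^3}{N(N-1)(N-2)\cdot\mathrm{MTTR}^2}

With nine 10^6-hour disks and a (very optimistic) 24-hour resilver, that works out to roughly 10^18 / (9*8*7*576) ≈ 3.4 x 10^12 hours per vdev; longer resilvers shrink it quadratically.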

Re: [zfs-discuss] Large zpool design considerations

2008-07-03 Thread Don Enrique
Don Enrique wrote: Now, my initial plan was to create one large pool comprised of X RAIDZ-2 vdevs (7+2) with one hot spare per 10 drives and just continue to expand that pool as needed. Between calculating the MTTDL and performance models I was hit by a rather scary thought. A

Re: [zfs-discuss] Large zpool design considerations

2008-07-03 Thread Bob Friesenhahn
On Thu, 3 Jul 2008, Don Enrique wrote: This means that I potentially could lose 40TB+ of data if three disks within the same RAIDZ-2 vdev should die before the resilvering of at least one disk is complete. Since most disks will be filled I do expect rather long resilvering times. Yes,
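
As a back-of-the-envelope bound (numbers hypothetical): resilver time is at best capacity divided by sustained rebuild throughput, and raidz resilver on a full pool is typically slower still, since it walks the block tree rather than streaming the disk sequentially.

  t_{\mathrm{resilver}} \gtrsim \frac{\mathrm{capacity}}{\mathrm{throughput}} = \frac{10^{12}\,\mathrm{B}}{50 \times 10^{6}\,\mathrm{B/s}} \approx 5.6\ \mathrm{hours}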

Re: [zfs-discuss] Large zpool design considerations

2008-07-03 Thread Chris Cosby
I'm going down a bit of a different path with my reply here. I know that all shops and their needs for data are different, but hear me out. 1) You're backing up 40TB+ of data, increasing at 20-25% per year. That's insane. Perhaps it's time to look at your backup strategy not from a hardware

Re: [zfs-discuss] J4200/J4400 Array

2008-07-03 Thread Albert Chin
On Thu, Jul 03, 2008 at 01:43:36PM +0300, Mertol Ozyoney wrote: You are right that the J series do not have NVRAM onboard. However, most JBODs, like HP's MSA series, have some NVRAM. The idea behind not using NVRAM on the JBODs is: -) There is no use adding limited RAM to a JBOD, as disks already

Re: [zfs-discuss] Large zpool design considerations

2008-07-03 Thread Miles Nordin
djm == Darren J Moffat [EMAIL PROTECTED] writes: bf == Bob Friesenhahn [EMAIL PROTECTED] writes: djm Why are you planning on using RAIDZ-2 rather than mirroring? Isn't MTTDL sometimes shorter for mirroring than raidz2? I think that is the biggest point of raidz2, is it not? bf The
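
For comparison under the same independent-failure assumptions, a two-way mirror vdev loses data only when the second side dies during the first side's repair window, giving per pair:

  MTTDL_{\mathrm{mirror}} \approx \frac{\mathrm{MTBF}^2}{2\cdot\mathrm{MTTR}}

Mirrors also resilver by copying just one disk's worth of data, so their MTTR is usually much shorter, which is why the comparison is closer than the exponents alone suggest.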

Re: [zfs-discuss] J4200/J4400 Array

2008-07-03 Thread Richard Elling
Albert Chin wrote: On Thu, Jul 03, 2008 at 01:43:36PM +0300, Mertol Ozyoney wrote: You are right that the J series do not have NVRAM onboard. However, most JBODs, like HP's MSA series, have some NVRAM. The idea behind not using NVRAM on the JBODs is: -) There is no use adding limited RAM to

Re: [zfs-discuss] Large zpool design considerations

2008-07-03 Thread Henrik Johansen
[Richard Elling] wrote: Don Enrique wrote: Hi, I am looking for some best practice advice on a project that I am working on. We are looking at migrating ~40TB of backup data to ZFS, with an annual data growth of 20-25%. Now, my initial plan was to create one large pool comprised of X

[zfs-discuss] Why RAID 5 stops working in 2009

2008-07-03 Thread Jim
Anyone here read the article "Why RAID 5 stops working in 2009" at http://blogs.zdnet.com/storage/?p=162? Does RAIDZ have the same chance of an unrecoverable read error as RAID 5 on Linux if the RAID has to be rebuilt because of a faulty disk? I imagine so, because of the physical constraints that
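
The article's arithmetic is easy to reproduce (a sketch, using the commonly quoted 10^14-bit unrecoverable read error rate for consumer SATA drives): a rebuild that must read ~12 TB gives an expected error count of

  E[\mathrm{UREs}] \approx \frac{12 \times 8 \times 10^{12}\ \mathrm{bits}}{10^{14}\ \mathrm{bits/error}} \approx 0.96

i.e. close to one expected failure per rebuild. The practical difference with raidz is that block checksums let ZFS report exactly which blocks were lost and carry on, rather than failing the whole rebuild.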

Re: [zfs-discuss] Large zpool design considerations

2008-07-03 Thread Richard Elling
Miles Nordin wrote: djm == Darren J Moffat [EMAIL PROTECTED] writes: bf == Bob Friesenhahn [EMAIL PROTECTED] writes: djm Why are you planning on using RAIDZ-2 rather than mirroring? Isn't MTTDL sometimes shorter for mirroring than raidz2? I think that is the biggest point

Re: [zfs-discuss] Large zpool design considerations

2008-07-03 Thread Bob Friesenhahn
On Thu, 3 Jul 2008, Richard Elling wrote: nit: SATA disks are single port, so you would need a SAS implementation to get multipathing to the disks. This will not significantly impact the overall availability of the data, however. I did an availability analysis of thumper to show this.

Re: [zfs-discuss] Why RAID 5 stops working in 2009

2008-07-03 Thread Aaron Blew
My take is that since RAID-Z creates a stripe for every block (http://blogs.sun.com/bonwick/entry/raid_z), it should be able to rebuild the bad sectors on a per-block basis. I'd assume that the likelihood of having bad sectors in the same places on all the disks is pretty low, since we're only

[zfs-discuss] Poor read/write performance when using ZFS iSCSI target

2008-07-03 Thread Cody Campbell
Greetings, I want to take advantage of the iSCSI target support in the latest release (snv_91) of OpenSolaris, and I'm running into some performance problems when reading from and writing to my target. I'm including as much detail as I can, so bear with me here... I've built an x86 OpenSolaris
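
For reference, a sketch of the zvol-backed target setup being described (pool and volume names hypothetical); on builds of that vintage the shareiscsi property was the simple way to export a zvol:

  # Create a 100 GB zvol and export it as an iSCSI target
  zfs create -V 100g tank/iscsivol
  zfs set shareiscsi=on tank/iscsivol
  # Confirm the target exists
  iscsitadm list target -v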

Re: [zfs-discuss] Why RAID 5 stops working in 2009

2008-07-03 Thread Mike Gerdts
On Thu, Jul 3, 2008 at 3:09 PM, Aaron Blew [EMAIL PROTECTED] wrote: My take is that since RAID-Z creates a stripe for every block (http://blogs.sun.com/bonwick/entry/raid_z), it should be able to rebuild the bad sectors on a per-block basis. I'd assume that the likelihood of having bad