Re: [zfs-discuss] Faster than 1G Ether... ESX to ZFS

2010-11-15 Thread Axel Denfeld
Hi, we have the same issue: ESX(i) and Solaris on the storage side. Link aggregation does not work with ESX(i) (I tried a lot with that for NFS); when you want to use more than one 1G connection you must configure one network or VLAN and at least one share for each connection. But this is also limited

Re: [zfs-discuss] how to quiesce and unquiesce zfs and zpool for array/hardware snapshots?

2010-11-15 Thread Kees Nuyt
On Sun, 14 Nov 2010 23:52:52 PST, sridhar surampudi toyours_srid...@yahoo.co.in wrote: Hi Darren, In short, I am looking for a way to freeze and thaw a ZFS file system so that, for a hardware snapshot, I can: 1. run zfs freeze 2. run a hardware snapshot on the devices belonging to the zpool where the given

Re: [zfs-discuss] how to quiesce and unquiesce zfs and zpool for array/hardware snapshots?

2010-11-15 Thread Andrew Gabriel
sridhar surampudi wrote: Hi Darren, In short, I am looking for a way to freeze and thaw a ZFS file system so that, for a hardware snapshot, I can: 1. run zfs freeze 2. run a hardware snapshot on the devices belonging to the zpool where the given file system is residing. 3. run zfs thaw Unlike other

Re: [zfs-discuss] Changing GUID

2010-11-15 Thread Colin Daly
Hi Cyril, I also need to change the GUID of a zpool (again because cloning at the LUN level has produced a duplicate). Have you a solution? CD -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org

Re: [zfs-discuss] how to quiesce and unquiesce zfs and zpool for array/hardware snapshots?

2010-11-15 Thread sridhar surampudi
Hi Andrew, Regarding your point - You will not be able to access the hardware snapshot from the system which has the original zpool mounted, because the two zpools will have the same pool GUID (there's an RFE outstanding on fixing this). Could you please
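For readers hitting the same duplicate-GUID problem from LUN-level cloning: at the time of this thread there was no supported way to rewrite a pool GUID (hence the outstanding RFE), but later ZFS releases added a `zpool reguid` subcommand for exactly this case. A hedged sketch, with pool names and device paths as placeholders:

```shell
# Export the original pool first so the duplicate GUID cannot clash,
# then import the clone by device path under a new name:
zpool export tank
zpool import -d /dev/dsk tank tank_clone   # placeholder names/paths

# On releases that support it (post-2010), stamp the clone with a
# fresh GUID so both pools can be imported side by side afterwards:
zpool reguid tank_clone
zpool import tank
```

On releases without `zpool reguid` (such as those current when this thread ran), importing clone and original simultaneously remains unsupported.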

[zfs-discuss] Ideas for ghetto file server data reliability?

2010-11-15 Thread VO
Hello List, I recently got bitten by a panic-on-`zpool import` problem (same CR 6915314) while testing a ZFS file server. Seems the pool is pretty much gone; I did try - zfs:zfs_recover=1 and aok=1 in /etc/system - `zpool import -fF -o ro` to no avail. I don't think I will be taking the time
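For anyone retracing these recovery attempts, the two /etc/system settings and the forced read-only import described above look roughly like this (a sketch of what the post reports trying, not a guaranteed recovery path; pool name is a placeholder):

```shell
# /etc/system additions (a reboot is required for these to take effect):
#   set zfs:zfs_recover = 1
#   set aok = 1

# Then attempt a forced, rewinding, read-only import
# (the post abbreviates the readonly option as "-o ro"):
zpool import -fF -o readonly=on tank
```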

Re: [zfs-discuss] Changing GUID

2010-11-15 Thread sridhar surampudi
Hi, I am looking along similar lines; my requirement is: 1. create a zpool on one or many devices (LUNs) from an array (the array can be IBM or HP EVA or EMC etc., not SS7000). 2. Create file systems on the zpool 3. Once file systems are in use (I/O is happening) I need to take a snapshot at array level

Re: [zfs-discuss] Changing GUID

2010-11-15 Thread Andrew Gabriel
sridhar surampudi wrote: Hi, I am looking along similar lines; my requirement is: 1. create a zpool on one or many devices (LUNs) from an array (the array can be IBM or HP EVA or EMC etc., not SS7000). 2. Create file systems on the zpool 3. Once file systems are in use (I/O is happening) I need to

Re: [zfs-discuss] Ideas for ghetto file server data reliability?

2010-11-15 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of VO The server hardware is pretty ghetto with whitebox components such as non-ECC RAM (cause of the pool loss). I know the hardware sucks but sometimes non-technical people don't understand

Re: [zfs-discuss] Ideas for ghetto file server data reliability?

2010-11-15 Thread Bryan Horstmann-Allen
+-- | On 2010-11-15 10:21:06, Edward Ned Harvey wrote: | | Backups. | | Even if you upgrade your hardware to better stuff... with ECC and so on ... | There is no substitute for backups. Period. If you care about your

Re: [zfs-discuss] Changing GUID

2010-11-15 Thread Torrey McMahon
Are those really your requirements? What is it that you're trying to accomplish with the data? Make a copy and provide it to another host? On 11/15/2010 5:11 AM, sridhar surampudi wrote: Hi, I am looking along similar lines; my requirement is: 1. create a zpool on one or many devices (LUNs) from

Re: [zfs-discuss] Ideas for ghetto file server data reliability?

2010-11-15 Thread Chad Leigh -- Shire.Net LLC
On Nov 15, 2010, at 8:32 AM, Bryan Horstmann-Allen wrote: +-- | On 2010-11-15 10:21:06, Edward Ned Harvey wrote: | | Backups. | | Even if you upgrade your hardware to better stuff... with ECC and so on ... |

Re: [zfs-discuss] Ideas for ghetto file server data reliability?

2010-11-15 Thread Toby Thain
On 15/11/10 10:32 AM, Bryan Horstmann-Allen wrote: +-- | On 2010-11-15 10:21:06, Edward Ned Harvey wrote: | | Backups. | | Even if you upgrade your hardware to better stuff... with ECC and so on ... | There is no

[zfs-discuss] Solaris 11 Express

2010-11-15 Thread Wolfraider
Just went to Oracle's website and noticed that you can download Solaris 11 Express.

[zfs-discuss] New system, Help needed!

2010-11-15 Thread Frank
I am a newbie on Solaris. We recently purchased a Sun SPARC M3000 server. It comes with two identical hard drives. I want to set up RAID 1. After searching on Google, I found that hardware RAID does not work with the M3000. So I am here to look for help on how to set up ZFS to use RAID 1.

[zfs-discuss] Moving rpool disks

2010-11-15 Thread Ray Van Dolson
We need to move the disks comprising our mirrored rpool on a Solaris 10 U9 x86_64 (not SPARC) system. We'll be relocating both drives to a different controller in the same system (they should go from c1* to c0*). We're curious what the best way is to go about this. We'd love to be able to just
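A root pool cannot be exported from the running system, but ZFS records device IDs as well as paths for pool members, so a controller move is usually tolerated at the next boot. A hedged sketch of the usual sequence (verify your firmware's boot-device setting first; behaviour can vary by HBA and driver):

```shell
# 1. Shut down, move both disks to the new controller, and boot.
# 2. ZFS should find the vdevs under their new c0* names via devid;
#    confirm the mirror came up healthy:
zpool status rpool

# 3. If stale /dev links remain from the old controller, clean them up:
devfsadm -C
```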

Re: [zfs-discuss] how to quiesce and unquiesce zfs and zpool for array/hardware snapshots?

2010-11-15 Thread Freddie Cash
On Sun, Nov 14, 2010 at 11:45 PM, sridhar surampudi toyours_srid...@yahoo.co.in wrote: Thank you for the details. I am aware of export/import of a zpool, but with zpool export the pool is not available for writes. Is there a way I can freeze a ZFS file system at the file system level? As an example,

[zfs-discuss] ZFS Crypto in Oracle Solaris 11 Express

2010-11-15 Thread Darren J Moffat
Today Oracle Solaris 11 Express was released and is available for download[1], this release includes on disk encryption support for ZFS. Using ZFS encryption support can be as easy as this: # zfs create -o encryption=on tank/darren Enter passphrase for 'tank/darren': Enter again:
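Beyond the one-liner above, the Solaris 11 Express encryption support hangs key handling off dataset properties. A small hedged sketch (dataset names are examples; consult the release's zfs(1M) man page for the exact option set):

```shell
# Create an encrypted dataset with an interactive passphrase:
zfs create -o encryption=on tank/darren

# Inspect how the dataset is protected:
zfs get encryption,keysource tank/darren

# Unload the wrapping key to take the data offline without
# destroying it, then load it again when needed:
zfs key -u tank/darren
zfs key -l tank/darren
```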

Re: [zfs-discuss] ZFS Crypto in Oracle Solaris 11 Express

2010-11-15 Thread David Magda
On Mon, November 15, 2010 14:14, Darren J Moffat wrote: Today Oracle Solaris 11 Express was released and is available for download[1], this release includes on disk encryption support for ZFS. Using ZFS encryption support can be as easy as this: # zfs create -o encryption=on tank/darren

Re: [zfs-discuss] Booting fails with `Can not read the pool label' error

2010-11-15 Thread Rainer Orth
Hi Cindy, I haven't seen this in a while but I wonder if you just need to set the bootfs property on your new root pool and/or reapply the bootblocks. I've created the new BE using beadm create, which did this for me: $ zpool get bootfs rpool2 NAME  PROPERTY  VALUE
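For reference, the two checks suggested above look roughly like this; the boot-block step differs between SPARC and x86. Pool, BE, and device names are placeholders following the thread:

```shell
# Confirm the root pool knows which dataset to boot:
zpool get bootfs rpool2
# If unset: zpool set bootfs=rpool2/ROOT/newbe rpool2

# Reapply boot blocks on SPARC:
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
    /dev/rdsk/c0t0d0s0
# x86 uses installgrub instead:
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0
```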

Re: [zfs-discuss] how to quiesce and unquiesce zfs and zpool for array/hardware snapshots?

2010-11-15 Thread Ian Collins
On 11/15/10 10:50 PM, sridhar surampudi wrote: Hi Andrew, Regarding your point - You will not be able to access the hardware snapshot from the system which has the original zpool mounted, because the two zpools will have the same pool GUID (there's an RFE outstanding on fixing

Re: [zfs-discuss] New system, Help needed!

2010-11-15 Thread Bryan Horstmann-Allen
+-- | On 2010-11-15 08:48:55, Frank wrote: | | I am a newbie on Solaris. | We recently purchased a Sun Sparc M3000 server. It comes with 2 identical hard drives. I want to setup a raid 1. After searching on google, I
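For the M3000 question: ZFS does software mirroring itself, so hardware RAID isn't required. A hedged sketch of the usual two-disk root-mirror setup on a SPARC box, assuming the OS is already installed on the first disk (device names are examples, not taken from the thread):

```shell
# Copy the boot disk's partition table to the second disk:
prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2

# Attach the second disk to the root pool, turning it into a mirror:
zpool attach rpool c0t0d0s0 c0t1d0s0

# Once resilvering completes, make the second disk bootable (SPARC):
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
    /dev/rdsk/c0t1d0s0
```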

[zfs-discuss] ZFS + L2ARC + Cluster.

2010-11-15 Thread Matt Banks
I asked this on the x86 mailing list (and got an "it should work" answer), but this is probably a more appropriate place for it. In a 2-node Sun Cluster (3.2 running Solaris 10 u8, but could be running u9 if needed), we're looking at moving from VxFS to ZFS. However, quite frankly, part of

Re: [zfs-discuss] Ideas for ghetto file server data reliability?

2010-11-15 Thread Sigbjorn Lie
Do you need registered ECC, or will non-reg ECC do to get around this issue you described? On Mon, 2010-11-15 at 16:48 +0700, VO wrote: Hello List, I recently got bitten by a panic on `zpool import` problem (same CR 6915314), while testing a ZFS file server. Seems the pool is pretty much

Re: [zfs-discuss] ZFS + L2ARC + Cluster.

2010-11-15 Thread Erik Trimble
On 11/15/2010 2:55 PM, Matt Banks wrote: I asked this on the x86 mailing list (and got an "it should work" answer), but this is probably a more appropriate place for it. In a 2-node Sun Cluster (3.2 running Solaris 10 u8, but could be running u9 if needed), we're looking at moving from VxFS

[zfs-discuss] ZFS - Sudden decrease in write performance

2010-11-15 Thread Louis
Hey all! Recently I've decided to implement OpenSolaris as a target for BackupExec. The server I've converted into a storage appliance is an IBM x3650 M2 w/ ~4TB of on-board storage via ~10 local SATA drives, and I'm using OpenSolaris snv_134. I'm using a QLogic 4Gb FC HBA w/ the QLT driver and

Re: [zfs-discuss] ZFS - Sudden decrease in write performance

2010-11-15 Thread Khushil Dep
Set your txg_synctime_ms to 0x3000 and retest, please? On 15 Nov 2010 23:23, Louis carreir...@gmail.com wrote: Hey all! Recently I've decided to implement OpenSolaris as a target for BackupExec. The server I've converted into a storage appliance is an IBM x3650 M2 w/ ~4TB of on-board storage
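For context: 0x3000 milliseconds is 12288 ms, roughly a 12-second transaction-group sync window. A hedged sketch of how the suggestion would be applied persistently (tunable name as commonly used around snv_134; verify it exists on your build before relying on it):

```shell
# Confirm the hex-to-decimal conversion used below:
printf '%d\n' 0x3000   # prints 12288

# Persistent form: add this line to /etc/system and reboot:
#   set zfs:zfs_txg_synctime_ms = 12288
```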

[zfs-discuss] Pool versions

2010-11-15 Thread Ian Collins
Is there an up to date reference following on from http://hub.opensolaris.org/bin/view/Community+Group+zfs/24 listing what's in the zpool versions up to the current 31? -- Ian.

Re: [zfs-discuss] ZFS + L2ARC + Cluster.

2010-11-15 Thread Matt Banks
On Nov 15, 2010, at 4:15 PM, Erik Trimble wrote: On 11/15/2010 2:55 PM, Matt Banks wrote: I asked this on the x86 mailing list (and got an "it should work" answer), but this is probably a more appropriate place for it. In a 2-node Sun Cluster (3.2 running Solaris 10 u8, but could be

Re: [zfs-discuss] ZFS - Sudden decrease in write performance

2010-11-15 Thread Khushil Dep
That controls ZFS breathing. I'm on a phone writing this, so I hope you won't mind me pointing you to listware.net/201005/opensolaris-zfs/115564-zfs-discuss-small-stalls-slowing-down-rsync-from-holding-network-saturation-every-5-seconds.html On 16 Nov 2010 00:20, Louis Carreiro

Re: [zfs-discuss] ZFS - Sudden decrease in write performance

2010-11-15 Thread Khushil Dep
Points to check are iostat, fsstat, zilstat, mpstat, prstat. Check for sw interrupt sharing; disable ohci. On 16 Nov 2010 00:27, Khushil Dep khushil@gmail.com wrote: That controls ZFS breathing. I'm on a phone writing this, so I hope you won't mind me pointing you to

Re: [zfs-discuss] Ideas for ghetto file server data reliability?

2010-11-15 Thread Bryan Horstmann-Allen
+-- | On 2010-11-15 11:27:02, Toby Thain wrote: | | Backups are not going to save you from bad memory writing corrupted data to | disk. | | It is, however, a major motive for using ZFS in the first place. In this

Re: [zfs-discuss] Excruciatingly slow resilvering on X4540 (build 134)

2010-11-15 Thread Mark Sandrock
On Nov 2, 2010, at 12:10 AM, Ian Collins wrote: On 11/2/10 08:33 AM, Mark Sandrock wrote: I'm working with someone who replaced a failed 1TB drive (50% utilized), on an X4540 running OS build 134, and I think something must be wrong. Last Tuesday afternoon, zpool status reported:

Re: [zfs-discuss] [OpenIndiana-discuss] format dumps the core

2010-11-15 Thread Christian Walther
Hi, This may seem odd, but I had errors just like this coming from a faulty CD ROM drive. The drive in question was unable to read the entire media, resulting in the following: 1. Live CD boots up fine, probably took longer than expected. Installation appears to be successful, some drive related

Re: [zfs-discuss] zpool split how it works?

2010-11-15 Thread Craig Cory
From the OpenSolaris ZFS FAQ page: http://hub.opensolaris.org/bin/view/Community+Group+zfs/faq If you want to use a hardware-level backup or snapshot feature instead of the ZFS snapshot feature, then you will need to do the following steps: * zpool export pool-name * Hardware-level
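Spelled out, the FAQ's sequence is just three commands wrapped around the array operation (pool name is a placeholder):

```shell
# 1. Quiesce the pool -- this unmounts its datasets and leaves the
#    on-disk state consistent:
zpool export tank

# 2. Take the hardware/array-level snapshot of the underlying LUNs here.

# 3. Bring the pool back online:
zpool import tank
```

The trade-off raised elsewhere in this digest applies: the pool is unavailable for I/O between the export and the import.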

Re: [zfs-discuss] Faster than 1G Ether... ESX to ZFS

2010-11-15 Thread Mark Sandrock
Edward, I recently installed a 7410 cluster, which had added Fibre Channel HBAs. I know the site also has Blade 6000s running VMware, but no idea if they were planning to run fibre to those blades (or even had the option to do so). But perhaps FC would be an option for you? Mark On Nov 12,

[zfs-discuss] SCSI timeouts with rPool on usb

2010-11-15 Thread Matthew Anderson
I'm currently having a few problems with my storage server. Server specs are - OpenSolaris snv_134, Supermicro X8DTi motherboard, Intel Xeon 5520, 6x 4GB DDR3, LSI RAID card running 24x 1.5TB SATA drives, Adaptec 2405 running 4x Intel X25-E SSDs. Boots from an 8GB USB flash drive. The initial

Re: [zfs-discuss] ZFS - Sudden decrease in write performance

2010-11-15 Thread Louis Carreiro
Almost! It seems like it held out a bit further than last time. Now arcsz hits 2G (matching 'c'), but it still drops off. It started at 5.6GB/min and fell off to less than 700MB/min. A snippet of my arcstat.pl output looks like the following: Time read miss miss% dmis dm% pmis pm%
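If arcsz is pinned at the 2 GB target ('c'), the ARC ceiling itself may be the limit. A hedged sketch of how the cap is typically inspected and raised on OpenSolaris-era systems (the example value is illustrative, not a recommendation for this workload):

```shell
# See the current ARC target and hard limit (sizes in bytes):
kstat -p zfs:0:arcstats:c zfs:0:arcstats:c_max

# Persistent form: set a larger cap in /etc/system and reboot,
# e.g. 8 GiB = 0x200000000 bytes:
#   set zfs:zfs_arc_max = 0x200000000
```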

Re: [zfs-discuss] New system, Help needed!

2010-11-15 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Bryan Horstmann-Allen | | I am a newbie on Solaris. | We recently purchased a Sun Sparc M3000 server. It comes with 2 identical hard drives. I want to setup a raid 1. After searching on

Re: [zfs-discuss] Ideas for ghetto file server data reliability?

2010-11-15 Thread Toby Thain
On 15/11/10 7:54 PM, Bryan Horstmann-Allen wrote: +-- | On 2010-11-15 11:27:02, Toby Thain wrote: | | Backups are not going to save you from bad memory writing corrupted data to | disk. | | It is, however, a

Re: [zfs-discuss] Ideas for ghetto file server data reliability?

2010-11-15 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Toby Thain The corruption will at least be detected by a scrub, even in cases where it cannot be repaired. Not necessarily. Let's suppose you have some bad memory, and no ECC. Your

Re: [zfs-discuss] Ideas for ghetto file server data reliability?

2010-11-15 Thread Toby Thain
On 15/11/10 9:28 PM, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Toby Thain The corruption will at least be detected by a scrub, even in cases where it cannot be repaired. Not necessarily. Let's suppose you

Re: [zfs-discuss] Ideas for ghetto file server data reliability?

2010-11-15 Thread Sriram Narayanan
To add: Even if you have great faith in ZFS, a backup helps in dealing with the unknown. Consider: - multiple disk failures that you are somehow unable to respond to. - hardware failures (power supplies, motherboard, RAM). - damage to the building. - having to recreate everything elsewhere - even

Re: [zfs-discuss] Ideas for ghetto file server data reliability?

2010-11-15 Thread VO
On Nov 15, 2010, at 8:32 AM, Bryan Horstmann-Allen wrote: +-- | On 2010-11-15 10:21:06, Edward Ned Harvey wrote: | | Backups. | | Even if you upgrade your hardware to better stuff... with ECC and so on

Re: [zfs-discuss] how to quiesce and unquiesce zfs and zpool for array/hardware snapshots?

2010-11-15 Thread sridhar surampudi
Hi, How would it help with instant recovery or point-in-time recovery, i.e. restoring data at the device/LUN level? Currently it is easy, as I can unwind the primary device stack, restore data at the device/LUN level, and recreate the stack. Thanks Regards, sridhar.

Re: [zfs-discuss] how to quiesce and unquiesce zfs and zpool for array/hardware snapshots?

2010-11-15 Thread Ian Collins
On 11/16/10 07:19 PM, sridhar surampudi wrote: Hi, How would it help with instant recovery or point-in-time recovery, i.e. restoring data at the device/LUN level? Why would you want to? If you are sending snapshots to another pool, you can do instant recovery at the pool level. Currently

Re: [zfs-discuss] how to quiesce and unquiesce zfs and zpool for array/hardware snapshots?

2010-11-15 Thread Andrew Gabriel
Sridhar, You have switched to a new disruptive filesystem technology, and it has to be disruptive in order to break out of all the issues older filesystems have, and give you all the new and wonderful features. However, you are still trying to use old filesystem techniques with it, which is

Re: [zfs-discuss] ZFS - Sudden decrease in write performance

2010-11-15 Thread Khushil Dep
Can you do an iostat -xCzn 3 from the start of the test till it drops speed, please? Can you also show: echo ::interrupts | mdb -k On 16 Nov 2010 01:45, Louis Carreiro carreir...@gmail.com wrote: Thanks for pointing me towards that site! Saying that txg_synctime_ms controls ZFS's breathing was how I