Re: [zfs-discuss] Finding SATA cards for ZFS; was Lundman home NAS

2009-09-01 Thread James Andrewartha
Jorgen Lundman wrote: The mv8 is a Marvell-based chipset, and it appears there are no Solaris drivers for it. There doesn't appear to be any movement from Sun or Marvell to provide any either. Do you mean specifically Marvell 6480 drivers? I use both DAC-SATA-MV8 and AOC-SAT2-MV8, which use

Re: [zfs-discuss] Remove the zfs snapshot keeping the original volume and clone

2009-09-01 Thread Henrik Bjornstrom - Sun Microsystems
Thanks for the answers. Lori Alt wrote: On 08/31/09 08:30, Henrik Bjornstrom - Sun Microsystems wrote: Hi ! Has anyone given an answer to this that I have missed? I have a customer who has the same question and I want to give him a correct answer. /Henrik Ketan wrote: I created a
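The answer itself is cut off in this preview; as background, a clone always depends on the snapshot it was created from, so that origin snapshot cannot be destroyed while the clone exists. A minimal sketch of one way to end up with the original volume, an independent copy of the clone, and no snapshot (dataset names tank/vol, tank/vol@snap1, tank/clone1 and tank/vol2 are hypothetical):

# Replicate the clone into a new, independent dataset
zfs snapshot tank/clone1@copy
zfs send tank/clone1@copy | zfs receive tank/vol2

# Clean up: the transfer snapshots, the clone, and finally the origin snapshot
zfs destroy tank/vol2@copy
zfs destroy tank/clone1@copy
zfs destroy tank/clone1
zfs destroy tank/vol@snap1

If the goal is instead to keep the clone and drop the original, 'zfs promote tank/clone1' reverses the dependency so the original side can be destroyed.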

Re: [zfs-discuss] ZFS commands hang after several zfs receives

2009-09-01 Thread Andrew Robert Nicols
On Sat, Aug 29, 2009 at 10:09:00AM +1200, Ian Collins wrote: I have a case open for this problem on Solaris 10u7. Interesting. One of our thumpers was previously running snv_112 and experiencing these issues. Switching to 10u7 has cured it and it's been stable now for several months. The case

Re: [zfs-discuss] ub_guid_sum and vdev guids

2009-09-01 Thread P. Anil Kumar
14408718082181993222 + 4867536591080553814 - 2^64 + 4015976099930560107 = 4845486699483555527 There was an overflow in between, which I overlooked. pak
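The sum wraps because ub_guid_sum is only 64 bits wide, so the vdev GUID sum is effectively taken modulo 2^64. A quick way to double-check with arbitrary-precision arithmetic (just a sketch, not from the thread):

# bc keeps full precision, so the 64-bit wrap has to be applied explicitly
echo '(14408718082181993222 + 4867536591080553814 + 4015976099930560107) % 2^64' | bc

which prints 4845486699483555527, matching the value above.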

[zfs-discuss] pkg image-update to snv 121 - shouldn't ZFS version be upgraded on /rpool

2009-09-01 Thread Per Öberg
When I check -- # pfexec zpool status rpool pool: rpool state: ONLINE status: The pool is formatted using an older on-disk format. The pool can still be used, but some features are unavailable. action: Upgrade the pool using 'zpool upgrade'. Once this is done, the pool will

Re: [zfs-discuss] pkg image-update to snv 121 - shouldn't ZFS version be upgraded on /rpool

2009-09-01 Thread Casper . Dik
When I check -- # pfexec zpool status rpool pool: rpool state: ONLINE status: The pool is formatted using an older on-disk format. The pool can still be used, but some features are unavailable. action: Upgrade the pool using 'zpool upgrade'. Once this is done, the pool will

Re: [zfs-discuss] pkg image-update to snv 121 - shouldn't ZFS version be upgraded on /rpool

2009-09-01 Thread Peter Dennis - Sustaining Engineer
Per Öberg wrote: When I check -- # pfexec zpool status rpool pool: rpool state: ONLINE status: The pool is formatted using an older on-disk format. The pool can still be used, but some features are unavailable. action: Upgrade the pool using 'zpool upgrade'. Once this is done,

Re: [zfs-discuss] pkg image-update to snv 121 - shouldn't ZFS version be upgraded on /rpool

2009-09-01 Thread Darren J Moffat
Per Öberg wrote: When I check -- # pfexec zpool status rpool pool: rpool state: ONLINE status: The pool is formatted using an older on-disk format. The pool can still be used, but some features are unavailable. action: Upgrade the pool using 'zpool upgrade'. Once this is done, the

Re: [zfs-discuss] pkg image-update to snv 121 - shouldn't ZFS version be upgraded on /rpool

2009-09-01 Thread Per Öberg
Thanks for all the answers. I've now cleared out the old BEs and upgraded the pools, and everything works as expected. /Per
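For reference, a minimal sketch of the sequence involved (assuming the root pool is named rpool; the boot device c0t0d0s0 is hypothetical). Once the pool is upgraded, older boot environments that only understand the old on-disk version can no longer use it, hence clearing out the old BEs first:

# List the versions supported by the running bits, then upgrade the pool
zpool upgrade -v
pfexec zpool upgrade rpool

# Dataset (zfs) versions are tracked separately and can be upgraded as well
pfexec zfs upgrade -r rpool

# On a root pool it is often recommended to refresh the boot blocks so they
# understand the new version (x86 shown; SPARC uses installboot instead)
pfexec installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0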

Re: [zfs-discuss] zfs performance cliff when over 80% util, still occuring when pool in 6

2009-09-01 Thread John-Paul Drawneek
I did not migrate my disks. I now have 2 pools - rpool is at 60% and is still dog slow. Also, scrubbing the rpool causes the box to lock up.

Re: [zfs-discuss] order bug, legacy mount and nfs sharing

2009-09-01 Thread kurosan
Hi kurosan, I hit the same issue, but it probably cannot work that way. Check 'zfs get all your_pool_mounted_/pathname'; you will see that 'mountpoint' is 'legacy', so you have to set zfs sharenfs=on again and try. Hi, thanks for the reply... I've only had time today to retry. I've re-enabled zfs sharenfs=on but nfs
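As context for the advice above: when a dataset has mountpoint=legacy, ZFS does not manage the mount or the NFS share itself, so the sharenfs property and 'zfs share' will not take effect on it. A rough sketch of the two usual ways out (the dataset tank/data and path /export/data are hypothetical):

# Option 1: hand the mount back to ZFS, then let sharenfs manage the export
zfs set mountpoint=/export/data tank/data
zfs set sharenfs=on tank/data

# Option 2: keep mountpoint=legacy (mounted via /etc/vfstab) and share it
# with the traditional tools instead
share -F nfs -o rw /export/data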

[zfs-discuss] ZFS incremental backup restore

2009-09-01 Thread Amir Javanshir
Hi all, I'm currently working on a small cookbook to showcase the backup and restore capabilities of ZFS using snapshots. I chose to back up the data directory of a MySQL 5.1 server as the example, using several backup/restore scenarios. The simplest is to simply snapshot the file
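Not from the cookbook itself, but a minimal sketch of the snapshot/send cycle such scenarios typically build on (dataset tank/mysql, pool backup and host backuphost are hypothetical; for a consistent image the MySQL server should be quiesced, or its tables flushed and locked by a session that stays open, while the snapshot is taken):

# Initial full backup: snapshot the dataset holding the datadir and send it
zfs snapshot tank/mysql@full
zfs send tank/mysql@full | ssh backuphost zfs receive backup/mysql

# Later: incremental backups relative to the previous snapshot
zfs snapshot tank/mysql@monday
zfs send -i tank/mysql@full tank/mysql@monday | ssh backuphost zfs receive -F backup/mysql

# Restore locally by rolling back, or pull a snapshot back from the backup host
zfs rollback tank/mysql@monday
# ssh backuphost zfs send backup/mysql@monday | zfs receive tank/mysql_restore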

Re: [zfs-discuss] zfs performance cliff when over 80% util, still occuring when pool in 6

2009-09-01 Thread Bob Friesenhahn
On Tue, 1 Sep 2009, John-Paul Drawneek wrote: I did not migrate my disks. I now have 2 pools - rpool is at 60% and is still dog slow. Also, scrubbing the rpool causes the box to lock up. This sounds like a hardware problem and not something related to fragmentation. Probably you have a
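Not from this thread, but a sketch of the usual first checks when a single slow or failing device is suspected of dragging a pool down:

# Per-device error counters and drive-reported defects
iostat -En

# Fault management telemetry and any diagnosed faults
fmdump -eV | more
fmadm faulty

# Pool-level read/write/checksum error counters
zpool status -v rpool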

Re: [zfs-discuss] ZFS iSCSI Clustered for VMware Host use

2009-09-01 Thread Jason
So aside from the NFS debate, would this two-tier approach work? I am a bit fuzzy on how I would get the RAIDZ2 redundancy but still present the volume to the VMware host as a raw device. Is that possible, or is my understanding wrong? Also, could it be defined as a clustered resource?

Re: [zfs-discuss] ZFS iSCSI Clustered for VMware Host use

2009-09-01 Thread Richard Elling
On Sep 1, 2009, at 11:45 AM, Jason wrote: So aside from the NFS debate, would this two-tier approach work? I am a bit fuzzy on how I would get the RAIDZ2 redundancy but still present the volume to the VMware host as a raw device. Is that possible or is my understanding wrong? Also could

Re: [zfs-discuss] zfs performance cliff when over 80% util, still occuring when pool in 6

2009-09-01 Thread Bob Friesenhahn
On Tue, 1 Sep 2009, Jpd wrote: Thanks. Any idea on how to work out which one? I can't find SMART in IPS, so what other ways are there? You could try using a script like this one to find pokey disks: #!/bin/ksh # Date: Mon, 14 Apr 2008 15:49:41 -0700 # From: Jeff Bonwick
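The script itself is truncated in this preview; as a rough stand-in (not the quoted script), something like the following watches per-device service times and flags outliers:

#!/bin/ksh
# Flag devices whose average service time (asvc_t) exceeds 50 ms in each
# 5-second sample; note the first sample is the average since boot.
iostat -xn 5 | awk '$NF != "device" && $8+0 > 50 { print $NF, $8 " ms" }'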

Re: [zfs-discuss] ZFS iSCSI Clustered for VMware Host use

2009-09-01 Thread Jason
True, though an enclosure for shared disks is expensive. This isn't for production but for me to explore what I can do with x86/x64 hardware. The idea is that I can just throw up another x86/x64 box to add more storage. Has anyone tried anything similar?

Re: [zfs-discuss] ZFS iSCSI Clustered for VMware Host use

2009-09-01 Thread Tim Cook
On Tue, Sep 1, 2009 at 2:17 PM, Jason wheelz...@hotmail.com wrote: True, though an enclosure for shared disks is expensive. This isn't for production but for me to explore what I can do with x86/x64 hardware. The idea being that I can just throw up another x86/x64 box to add more storage.

Re: [zfs-discuss] ZFS iSCSI Clustered for VMware Host use

2009-09-01 Thread Richard Elling
On Sep 1, 2009, at 12:17 PM, Jason wrote: True, though an enclosure for shared disks is expensive. This isn't for production but for me to explore what I can do with x86/x64 hardware. The idea being that I can just throw up another x86/x64 box to add more storage. Has anyone tried

Re: [zfs-discuss] ZFS iSCSI Clustered for VMware Host use

2009-09-01 Thread Jason
I guess I should come at it from the other side: if you have one iSCSI target box and it goes down, you're dead in the water. If you have two iSCSI target boxes that replicate and one dies, you are OK, but you then have a 2:1 ratio of total storage to usable storage (excluding expensive shared disks).
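For what it's worth, a hedged sketch of the two-tier layout being discussed, with hypothetical names and addresses throughout (backend/lun0, vmpool, the IP addresses and the c2t*d0 device names): each back-end box exports a zvol over iSCSI, and the front-end ZFS head builds a raidz2 pool across those LUNs and re-exports storage to the VMware hosts:

# On each back-end box: carve a zvol and export it as an iSCSI target
# (legacy iscsitgt shown; COMSTAR would use sbdadm/itadm instead)
zfs create -V 500g backend/lun0
zfs set shareiscsi=on backend/lun0

# On the front-end head: discover the targets and build the redundant pool
iscsiadm modify discovery --sendtargets enable
iscsiadm add discovery-address 192.168.1.11 192.168.1.12 192.168.1.13 192.168.1.14
devfsadm -i iscsi
zpool create vmpool raidz2 c2t1d0 c2t2d0 c2t3d0 c2t4d0

# Export a zvol from the redundant pool back out to the VMware hosts
zfs create -V 1t vmpool/esxlun0
zfs set shareiscsi=on vmpool/esxlun0

The raidz2 redundancy lives entirely in the head's pool, so the back-end boxes need no replication of their own; losing the head still takes everything down unless it is clustered.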

Re: [zfs-discuss] ZFS iSCSI Clustered for VMware Host use

2009-09-01 Thread Scott Meilicke
You are completely off your rocker :) No, just kidding. Assuming the virtual front-end servers are running on different hosts, and you are doing some sort of RAID, you should be fine. Performance may be poor due to the inexpensive targets on the back end, but you probably know that. A while

Re: [zfs-discuss] ZFS iSCSI Clustered for VMware Host use

2009-09-01 Thread Richard Elling
On Sep 1, 2009, at 1:28 PM, Jason wrote: I guess I should come at it from the other side: if you have one iSCSI target box and it goes down, you're dead in the water. Yep. If you have two iSCSI target boxes that replicate and one dies, you are OK but you then have to have a 2:1 total

[zfs-discuss] high speed at 7,200 rpm

2009-09-01 Thread Richard Elling
FYI, Western Digital shipping high-speed 2TB hard drive http://news.cnet.com/8301-17938_105-10322886-1.html?tag=newsEditorsPicksArea.0 I'm not sure how many people think 7,200 rpm is high speed but, hey, it is better than 5,900 rpm :-) -- richard

Re: [zfs-discuss] snv_110 - snv_121 produces checksum errors on Raid-Z pool

2009-09-01 Thread Adam Leventhal
Hi James, After investigating this problem a bit I'd suggest avoiding deploying RAID-Z until this issue is resolved. I anticipate having it fixed in build 124. Apologies for the inconvenience. Adam On Aug 28, 2009, at 8:20 PM, James Lever wrote: On 28/08/2009, at 3:23 AM, Adam Leventhal

Re: [zfs-discuss] snv_110 - snv_121 produces checksum errors on Raid-Z pool

2009-09-01 Thread James Lever
On 02/09/2009, at 9:54 AM, Adam Leventhal wrote: After investigating this problem a bit I'd suggest avoiding deploying RAID-Z until this issue is resolved. I anticipate having it fixed in build 124. Thanks for the status update on this Adam. cheers, James

Re: [zfs-discuss] order bug, legacy mount and nfs sharing

2009-09-01 Thread Masafumi Ohta
On 2009/09/01, at 22:15, kurosan wrote: Hi kurosan, I hit the same issue, but it probably cannot work that way. Check 'zfs get all your_pool_mounted_/pathname'; you will see that 'mountpoint' is 'legacy', so you have to set zfs sharenfs=on again and try. Hi, thanks for the reply... I've only had time today to retry.