[zfs-discuss] zfs zone filesystem creation does not mount if root of pool is not mounted

2009-12-18 Thread Kam
$ zpool create dpool mirror c1t2d0 c1t3d0
$ zfs set mountpoint=none dpool
$ zfs create -o mountpoint=/export/zones dpool/zones
On Solaris 10 Update 8, when creating a zone with zonecfg, setting the zonepath to /export/zones/test1, and then installing with zoneadm install, the zfs zonepath file

Re: [zfs-discuss] zone's filesystem does not mount at zone create if root of pool not mounted

2009-12-18 Thread Kam Lane
A bug is being filed on this by Sun. A senior Sun engineer was able to replicate the problem, and the only workaround they suggested was to temporarily mount the parent filesystem on the pool. This applies to Solaris 10 Update 8; not sure about anything else. -- This message posted from
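The suggested workaround could be sketched as follows. The pool and zone names (dpool, test1) come from the original report, but the exact command sequence is an assumption, not a confirmed fix:

```shell
# Temporarily give the pool root a mountpoint so that child
# filesystems created during zone install mount correctly
# (hypothetical sequence based on the reported workaround).
zfs set mountpoint=/dpool dpool

# Install the zone while the parent filesystem is mounted.
zoneadm -z test1 install

# Restore the original configuration afterwards.
zfs set mountpoint=none dpool
```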

Re: [zfs-discuss] So close to better, faster, cheaper....

2008-11-24 Thread Kam
Are there any performance penalties incurred by mixing vdevs? Say you start with a raidz1 of three 500GB disks, then over time add a mirror of two 1TB disks.
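The scenario described above can be sketched as below; the pool name and disk names are placeholders. zpool refuses to add a vdev whose replication level differs from the pool's existing vdevs unless forced with -f:

```shell
# Original pool: one raidz1 vdev of three 500GB disks.
zpool create tank raidz1 c1t0d0 c1t1d0 c1t2d0

# Later, grow the pool with a mirror of two 1TB disks.
# zpool warns about the mismatched replication level, so -f is
# required; new writes are then striped across both vdevs.
zpool add -f tank mirror c2t0d0 c2t1d0
```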

[zfs-discuss] So close to better, faster, cheaper....

2008-11-21 Thread Kam
Posted for my friend Marko: I've been reading up on ZFS with the idea of building a home NAS. My ideal home NAS would have:
- high performance via striping
- fault tolerance with selective use of the multiple-copies attribute
- cheap by getting the most efficient space utilization possible (not raidz,
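The "selective use of multiple copies" idea refers to the per-filesystem copies property, which stores extra copies of every block on a filesystem without vdev-level redundancy. A minimal sketch, with placeholder pool and filesystem names:

```shell
# Important data: keep two copies of every block, at the cost
# of doubling its space usage.
zfs create tank/photos
zfs set copies=2 tank/photos

# Bulk data that can be re-downloaded stays at the default of
# one copy, maximizing usable space.
zfs create tank/scratch
```

Note that copies protects against isolated bad blocks, not whole-disk failure on a pool with no vdev redundancy.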

[zfs-discuss] zfs hardware failure questions

2008-11-20 Thread Kam
I was asked a few interesting questions by a fellow co-worker regarding ZFS and, after much googling, still can't find answers. I've seen several people ask these questions, only to have them answered indirectly. If I have a pool that consists of a raidz1 with three 500GB disks

[zfs-discuss] Order of operations w/ checksum errors

2008-01-25 Thread Kam
zpool status shows a few checksum errors against one device in a three-disk raidz1 array, and no read or write errors against that device. The pool is marked as degraded. Is there a difference between clearing the errors for the pool before you scrub versus scrubbing and then clearing the errors? I'm not sure if
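One common ordering, sketched below with a placeholder pool name, is to scrub first and clear afterwards, so the clear only wipes counters whose errors the scrub has already examined and repaired where possible. This is a sketch of that workflow, not an authoritative answer to the ordering question:

```shell
# Scrub first: read and verify every block, repairing what the
# raidz1 redundancy allows, and update the error counters.
zpool scrub tank

# Wait for the scrub to complete, then review the results.
zpool status -v tank

# Only then reset the error counters and the DEGRADED state.
zpool clear tank
```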

[zfs-discuss] mixing raidz1 and raidz2 in same pool

2007-12-06 Thread Kam
Does anyone know if there are any issues mixing one 5+2 raidz2 vdev in the same pool with six 5+1 raidz1 vdevs? Would there be any performance hit?
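For concreteness, the mixed layout asked about could be built as sketched below (disk names are placeholders, and only two of the six raidz1 vdevs are shown). As with any mismatched replication levels, zpool requires -f to accept the raidz2 vdev alongside the raidz1 vdevs:

```shell
# Two of the six 5+1 raidz1 vdevs (same pattern for the rest).
zpool create tank \
    raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
    raidz1 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0

# Add the single 5+2 raidz2 vdev; -f overrides the
# mismatched-replication-level warning.
zpool add -f tank raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0
```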

Re: [zfs-discuss] x4500 w/ small random encrypted text files

2007-11-30 Thread Kam
I'm using the thumper as a secondary storage device and am therefore technically only worried about capacity and performance. In terms of availability, if it fails I should be okay as long as I don't also lose the primary storage during the time it takes to recover the secondary [knock on

Re: [zfs-discuss] x4500 w/ small random encrypted text files

2007-11-29 Thread Kam
Thanks everyone. Basically I'll be generating a list of files to grab, doing a wget to pull individual files from an Apache web server, and then placing them in their respective nested directory locations. When it comes time for a restore, I generate another list of files scattered throughout

[zfs-discuss] x4500 w/ small random encrypted text files

2007-11-28 Thread Kam Lane
I'm getting ready to test a thumper (500GB drives / 16GB) as a backup store for small (avg 2KB) encrypted text files. I'm considering a zpool of seven 5+1 raidz1 vdevs to maximize space and provide some level of redundancy, carved into about 10 zfs filesystems. Since the files are encrypted,
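The proposed layout could look like the sketch below; the pool name, controller/disk names, and filesystem names are placeholders, and only two of the seven raidz1 vdevs are spelled out:

```shell
# Seven six-disk (5+1) raidz1 vdevs; two shown, the remaining
# five follow the same pattern on their own controllers.
zpool create backup \
    raidz1 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 \
    raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
    # ...five more raidz1 vdevs in the same pattern

# Carve the pool into separate filesystems.
zfs create backup/store01
zfs create backup/store02
```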

Re: [zfs-discuss] x4500 w/ small random encrypted text files

2007-11-28 Thread Kam
Point of clarification: I meant recordsize. I'm guessing (from what I've read) that the blocksize is auto-tuned.
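That guess is essentially right: recordsize is a per-filesystem upper bound, and files smaller than it are stored in a single block sized to fit the file, so tiny files don't waste a full record. A sketch with placeholder names and an illustrative value:

```shell
# Cap the record size for a filesystem holding ~2KB files; the
# property can be set at any time and affects new writes.
zfs set recordsize=8K backup/store01

# Confirm the setting.
zfs get recordsize backup/store01
```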