[zfs-discuss] zfs zone filesystem creation does not mount if root of pool is not mounted

2009-12-18 Thread Kam
$zpool create dpool mirror c1t2d0 c1t3d0
$zfs set mountpoint=none dpool
$zfs create -o mountpoint=/export/zones dpool/zones

On Solaris 10 Update 8, when creating a zone with zonecfg, setting the
zonepath to /export/zones/test1, and then installing with zoneadm install, the
ZFS zonepath filesystem gets created but not mounted if the root of the pool
dpool is not mounted. An ls of /export/zones shows that instead of a mounted
filesystem, a plain directory is created and the zone is installed there. Can
anyone confirm whether this also happens on OpenSolaris? I don't have any free
drives to test this with.
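
For anyone who wants to try reproducing this, I set the zones up roughly as
follows (the exact zonecfg session is reconstructed from memory, so treat it
as a sketch rather than a verbatim transcript):

$zonecfg -z test1 "create; set zonepath=/export/zones/test1; commit"
$zoneadm -z test1 install
$ls -ld /export/zones/test1   # plain directory under dpool/zones, not a mounted dataset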

$zfs list -o name,used,avail,reservation,quota,mounted,mountpoint
NAME                 USED  AVAIL  RESERV  QUOTA  MOUNTED  MOUNTPOINT
dpool                224M   134G    none   none       no  none
dpool/zones          224M   134G    none   none      yes  /export/zones
dpool/zones/test1     21K   134G    none   none       no  /export/zones/test1
dpool/zones/test2     21K   134G    none   none       no  /export/zones/test2
dpool/zones/testfs    21K   134G    none   none      yes  /export/zones/testfs

$zoneadm list -vc
  ID NAME    STATUS   PATH                  BRAND    IP
   0 global  running  /                     native   shared
   3 test1   running  /export/zones/test1   native   shared
   4 test2   running  /export/zones/test2   native   shared


Re: [zfs-discuss] zone's filesystem does not mount at zone create if root of pool not mounted

2009-12-18 Thread Kam Lane
A bug is being filed on this by Sun. A senior Sun engineer was able to
replicate the problem, and the only workaround they suggested was to
temporarily mount the parent filesystem on the pool. This applies to Solaris
10 Update 8; not sure about anything else.
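
In case it helps anyone who hits this before the fix lands, the workaround
amounts to something like the following (the temporary mountpoint path is just
an example):

$zfs set mountpoint=/dpool dpool     # temporarily give the pool root a mountpoint
$zoneadm -z test1 install            # zonepath dataset now gets created and mounted
$zfs set mountpoint=none dpool       # put things back afterwards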


Re: [zfs-discuss] So close to better, faster, cheaper....

2008-11-24 Thread Kam
Are there any performance penalties incurred by mixing vdev types? Say you
start with a raidz1 of three 500 GB disks, then over time you add a mirror of
two 1 TB disks.
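
In zpool terms, the scenario I mean would look roughly like this (device names
are made up):

$zpool create tank raidz1 c1t0d0 c1t1d0 c1t2d0   # three 500 GB disks
$zpool add -f tank mirror c2t0d0 c2t1d0          # later: a mirror of two 1 TB disks
                                                 # (-f because the replication levels differ)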


[zfs-discuss] So close to better, faster, cheaper....

2008-11-21 Thread Kam
Posted for my friend Marko:

I've been reading up on ZFS with the idea to build a home NAS.

My ideal home NAS would have:

- high performance via striping
- fault tolerance with selective use of multiple copies attribute
- cheap by getting the most efficient space utilization possible (not raidz, 
not mirroring)
- scalability


I was hoping to start with four 1 TB disks in a single striped pool, with only
some filesystems set to copies=2.
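
Something along these lines is what I have in mind (device and dataset names
are only illustrative):

$zpool create tank c0t0d0 c0t1d0 c0t2d0 c0t3d0   # 4 x 1 TB, plain stripe
$zfs create tank/scratch                         # default copies=1
$zfs create -o copies=2 tank/important           # ditto copies for the data I care about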

I would be able to survive a single disk failure for the data that was on the
copies=2 filesystem (trusting that I had enough free space across multiple
disks that the copies=2 writes were not placed on the same physical disk).

I could grow this filesystem just by adding single disks.

Theoretically, at some point I would switch to copies=3 to increase my chances
of surviving two disk failures. The block checksums would be useful for early
detection of failing disks.
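
Presumably that switch is just the following, with the caveat that copies only
applies to data written after the property is changed, so existing files would
have to be rewritten to pick it up:

$zfs set copies=3 tank/important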


The major snag I discovered is that if a striped pool loses a disk, I can still 
read and write from
the remaining data, but I cannot reboot and remount a partial piece of the 
stripe, even with -f.

For example, if I lost some of my single-copy data, I'd like to still access
the good data, pop in a new (potentially larger) disk, re-cp the important
data to get the multiple copies rebuilt, and not have to rebuild the entire
pool structure.


So the feature request would be for zfs to allow selective disk removal from
striped pools, with the resultant data loss, but any data that survived,
either by chance (living on the remaining disks) or by policy (multiple
copies), would still be accessible.

Is there some underlying reason in zfs that precludes this functionality?

If the filesystem partially survives when a striped pool member disk fails and
the box is still up, why not after a reboot?


[zfs-discuss] zfs hardware failure questions

2008-11-20 Thread Kam
I was asked a few interesting questions by a co-worker regarding ZFS and,
after much googling, still can't find answers. I've seen several people ask
these questions, but they have only been answered indirectly.

If I have a pool that consists of a raidz1 of three 500 GB disks, and I go to
the store, buy a fourth 500 GB disk, and add it to the pool as a second vdev,
what happens when that fourth disk has a hardware failure?
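
Concretely, the layout in question would be built with something like this
(device names are placeholders):

$zpool create tank raidz1 c1t0d0 c1t1d0 c1t2d0   # three 500 GB disks in raidz1
$zpool add -f tank c1t3d0                        # fourth disk as its own top-level vdev
                                                 # (-f because the replication levels differ)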

The second question: let's say I have two disks and I create a non-parity pool
[2 vdevs creating /tank] with a single child filesystem [/tank/fscopies2/] in
the pool with the copies=2 attribute. If I lose one of these disks, will I
still have access to my files? If you were to add a third disk to this pool as
a third vdev at some future point, would there be any scenario where a
hardware failure would cause the rest of the pool to be unreadable?
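
In other words, a setup roughly like this (disk names made up, filesystem name
taken from the question above):

$zpool create tank c0d0 c0d1                     # two-disk stripe, no parity
$zfs create -o copies=2 tank/fscopies2           # ditto copies for this filesystem only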


[zfs-discuss] Order of operations w/ checksum errors

2008-01-25 Thread Kam
zpool status shows a few checksum errors against one device in a 3-disk raidz1
array, and no read or write errors against that device. The pool is marked as
degraded. Is there a difference between clearing the errors for the pool
before you scrub versus scrubbing and then clearing the errors? I'm not sure
whether, if the errors are cleared prior to a scrub, the scrub will still
repair the bad blocks that had previously been identified as checksum errors.
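
In other words, does the ordering here matter (pool and device names are just
examples):

# option 1: clear first, then scrub
$zpool clear tank c1t1d0
$zpool scrub tank

# option 2: scrub first, then clear
$zpool scrub tank
$zpool clear tank c1t1d0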
 
 


[zfs-discuss] mixing raidz1 and raidz2 in same pool

2007-12-06 Thread Kam
Does anyone know if there are any issues with mixing one 5+2 raidz2 vdev in
the same pool as six 5+1 raidz1 vdevs? Would there be any performance hit?
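
For concreteness, a pool built along these lines (disk names invented, and
only two of the six raidz1 vdevs shown):

$zpool create tank raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0   # 5+1 raidz1
$zpool add tank raidz1 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0      # ...and so on
$zpool add -f tank raidz2 c8t0d0 c8t1d0 c8t2d0 c8t3d0 c8t4d0 c8t5d0 c8t6d0   # the 5+2 raidz2
                                                 # (-f because the parity levels differ)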
 
 


Re: [zfs-discuss] x4500 w/ small random encrypted text files

2007-11-30 Thread Kam
I'm using the thumper as a secondary storage device and am therefore
technically only worried about capacity and performance. In regards to
availability, if it fails I should be okay as long as I don't also lose the
primary storage during the time it takes to recover the secondary [knock on
wood].
 
 


Re: [zfs-discuss] x4500 w/ small random encrypted text files

2007-11-29 Thread Kam
Thanks everyone. Basically I'll be generating a list of files to grab, doing a
wget to pull the individual files from an Apache web server, and then placing
them in their respective nested directory locations. When it comes time for a
restore, I generate another list of files scattered throughout the directory
structure and basically scp them to their destination. Additionally, there
will be multiple simultaneous wget streams, each writing to its own filesystem
in the zpool.
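
Roughly speaking, each backup stream would look something like this (list
files and paths are all hypothetical):

$wget -x -i /var/tmp/filelist-01.txt -P /tank/fs01           # fetch the listed files, recreating the directory tree
$scp -r /tank/fs01/path/to/files user@primary:/restore/dir   # later, for a restore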
 
 


[zfs-discuss] x4500 w/ small random encrypted text files

2007-11-28 Thread Kam Lane
I'm getting ready to test a thumper (500 GB drives / 16 GB) as a backup store
for small (avg 2 KB) encrypted text files. I'm considering a zpool of seven
5+1 raidz1 vdevs, to maximize space and provide some level of redundancy,
carved into about 10 ZFS filesystems. Since the files are encrypted,
compression is obviously out. Is it recommended to tune the ZFS blocksize to
2 KB for this type of implementation? Also, has anyone noticed any performance
impact when presenting a config like this to a non-global zone?
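
For reference, the layout I'm considering would be built roughly like this
(disk names are placeholders, and only the first two of the seven vdevs are
shown):

$zpool create backup raidz1 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0   # 5+1 raidz1 vdev 1
$zpool add backup raidz1 c0t6d0 c0t7d0 c1t0d0 c1t1d0 c1t2d0 c1t3d0      # vdev 2, and so on
$zfs create backup/fs01                                                 # one of the ~10 filesystems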
 
 


Re: [zfs-discuss] x4500 w/ small random encrypted text files

2007-11-28 Thread Kam
Point of clarification: I meant recordsize. I'm guessing {from what I've read} 
that the blocksize is auto-tuned.
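
So the setting I'm asking about is something along these lines (dataset name
is just a placeholder):

$zfs set recordsize=2k backup/fs01   # recordsize caps per-file block size; smaller files already get smaller blocks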
 
 