Re: [zfs-discuss] Ooops - did it again... Moved disks without export first.

2010-10-28 Thread Jan Hellevik
I think the 'corruption' is caused by the shuffling and mismatch of the disks. One 1.5TB disk is now believed to be part of a mirror with a 2TB disk, a 1TB with a 1.5TB, and so on. It would be better if ZFS tried to find the second disk of each mirror instead of relying on what

Re: [zfs-discuss] Ooops - did it again... Moved disks without export first.

2010-10-28 Thread David Magda
On Oct 28, 2010, at 04:44, Jan Hellevik wrote: So, my best action would be to delete the zpool.cache and then do a zpool import? Should I try to match disks with cables as it was previously connected before I do the import? Will that make any difference? BTW, ZFS version is 22. I'd say
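
[Editor's note: for reference, the usual recovery sequence is roughly the following. This is a minimal sketch; the pool name 'tank' is a placeholder, and /etc/zfs/zpool.cache is the standard cache location on (Open)Solaris.

    mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bak   # force a full device rescan
    zpool import        # with no arguments: scan devices, list importable pools
    zpool import tank   # import by name once the configuration looks sane

With the cache out of the way, zpool import reads the labels on the disks themselves, so the physical cabling order should not matter.]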

[zfs-discuss] Good write, but slow read speeds over the network

2010-10-28 Thread Stephan Budach
Hi all, I am running Netatalk on OSol snv_134 on a Dell R610 server with 32 GB RAM. I am experiencing different speeds when writing to and reading from the pool. The pool itself consists of two FC LUNs that each form a vdev (no comments on that please, we discussed that already! ;) ). Now, I
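
[Editor's note: before blaming the network, it helps to confirm where the reads slow down. A minimal sketch; the pool name 'tank' and the test file are placeholders.

    zpool iostat -v tank 5                      # per-vdev throughput while reproducing the load
    dd if=/tank/testfile of=/dev/null bs=1M     # local sequential read, no network involved

If the local dd is fast but network reads are slow, the bottleneck is in Netatalk or the network stack rather than the pool.]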

Re: [zfs-discuss] Ooops - did it again... Moved disks without export first.

2010-10-28 Thread Jan Hellevik
Thanks! I will try later today and report back the result.

Re: [zfs-discuss] PowerEdge R510 with PERC H200/H700 with ZFS

2010-10-28 Thread Mariusz
Hi. I installed Solaris 10 x86 on a PowerEdge R510 with a PERC H700 without problems, with 8 HDDs configured as RAID 6. The only question is how to monitor this controller. Do you have any tools that allow you to monitor it and get HDD status? Thank you for the help. PS. I know this is OpenSolaris not
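
[Editor's note: the H700 is an LSI MegaRAID-based controller, so LSI's MegaCli utility is the usual answer; LSI shipped builds for Solaris x86, though whether it runs on your exact release is something to verify. A sketch of the common status queries, assuming MegaCli is on the PATH:

    MegaCli -AdpAllInfo -aALL     # adapter summary: firmware, battery, error counters
    MegaCli -LDInfo -Lall -aALL   # virtual disk state, e.g. the RAID-6 volume health
    MegaCli -PDList -aALL         # per-physical-drive status and predictive-failure counts]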

Re: [zfs-discuss] PowerEdge R510 with PERC H200/H700 with ZFS

2010-10-28 Thread Kyle McDonald
On 8/7/2010 4:11 PM, Terry Hull wrote: It is just that lots of the PERC controllers do not do JBOD very well. I've done it several times making a RAID 0 for each drive. Unfortunately, that means the server has lots of RAID hardware that is not
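
[Editor's note: for the record, MegaCli can script the one-RAID-0-per-drive setup in a single command. A hedged sketch; the exact option spelling varies between MegaCli versions, so check the built-in help first:

    MegaCli -CfgEachDskRaid0 -a0   # create a single-drive RAID 0 per unconfigured disk on adapter 0]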

[zfs-discuss] sharesmb should be ignored if filesystem is not mounted

2010-10-28 Thread Richard L. Hamilton
I have sharesmb=on set for a bunch of filesystems, including three that weren't mounted. Nevertheless, all of those are advertised. Needless to say, the ones that aren't mounted can't be accessed remotely, even though, since they are advertised, it looks like they could be. # zfs list -o
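
[Editor's note: the mismatch is easy to see, and to clean up, from the properties themselves; 'pool/fs' below is a placeholder:

    zfs list -o name,mounted,sharesmb   # compare what is mounted vs. what is advertised
    zfs set sharesmb=off pool/fs        # stop advertising a filesystem that is not mounted]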

Re: [zfs-discuss] sharesmb should be ignored if filesystem is not mounted

2010-10-28 Thread Richard L. Hamilton
PS obviously these are home systems; in a real environment, I'd only be sharing out filesystems with user or application data, and not local system filesystems! But since it's just me, I somewhat trust myself not to shoot myself in the foot.

[zfs-discuss] Mirroring a zpool

2010-10-28 Thread SR
I have a raidz2 zpool which I would like to create a mirror of. Is it possible to create a mirror of a zpool? I know I can create multi-way mirrors of vdevs, do zfs send/receive etc. to mirror data. But can I create a mirror at the zpool level? Thanks SR

Re: [zfs-discuss] Mirroring a zpool

2010-10-28 Thread Cindy Swearingen
Hi SR, You can create a mirrored storage pool, but you can't mirror an existing raidz2 pool nor can you convert a raidz2 pool to a mirrored pool. You would need to copy the data from the existing pool, destroy the raidz2 pool, and create a mirrored storage pool. Cindy On 10/28/10 11:19, SR
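
[Editor's note: a minimal sketch of that migration, assuming spare disks are available for the new pool; pool and device names are placeholders:

    zpool create newpool mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
    zfs snapshot -r oldpool@migrate                            # recursive snapshot of everything
    zfs send -R oldpool@migrate | zfs receive -F -d newpool    # replicate datasets and properties
    zpool destroy oldpool                                      # only after verifying the copy

zfs send -R carries snapshots, descendent filesystems, and properties, so the new mirrored pool ends up with the same layout as the raidz2 original.]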

Re: [zfs-discuss] zil behavior

2010-10-28 Thread Edward Ned Harvey

[zfs-discuss] stripes of different size mirror groups

2010-10-28 Thread Rob Cohen
I have a couple of drive enclosures: 15x 450 GB 15k RPM SAS, 15x 600 GB 15k RPM SAS. I'd like to set them up like RAID10. Previously, I was using two hardware RAID10 volumes, with the 15th drive as a hot spare, in each enclosure. Using ZFS, it could be nice to make them a single volume, so that I could
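
[Editor's note: in ZFS terms that is one pool striped across many two-way mirror vdevs, and mixing mirror sizes in one pool is allowed. A minimal sketch; device names are placeholders, and the mirror pairs would be repeated for all 14 data drives per enclosure:

    zpool create tank \
        mirror c2t0d0 c2t1d0 \
        mirror c3t0d0 c3t1d0 \
        spare c2t14d0 c3t14d0

ZFS stripes writes across all top-level vdevs, biased toward those with more free space, so the 600 GB mirrors simply absorb proportionally more data as the pool fills.]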

Re: [zfs-discuss] stripes of different size mirror groups

2010-10-28 Thread Ian Collins
On 10/29/10 09:40 AM, Rob Cohen wrote: I have a couple of drive enclosures: 15x 450 GB 15k RPM SAS, 15x 600 GB 15k RPM SAS. I'd like to set them up like RAID10. Previously, I was using two hardware RAID10 volumes, with the 15th drive as a hot spare, in each enclosure. Using ZFS, it could be nice to

Re: [zfs-discuss] stripes of different size mirror groups

2010-10-28 Thread Rob Cohen
Thanks, Ian. If I understand correctly, the performance would then drop to the same level as if I set them up as separate volumes in the first place. So, I get double the performance for 75% of my data, and equal performance for 25% of my data, and my L2ARC will adapt to my working set across

Re: [zfs-discuss] stripes of different size mirror groups

2010-10-28 Thread Roy Sigurd Karlsbakk
If I understand correctly, the performance would then drop to the same level as if I set them up as separate volumes in the first place. So, I get double the performance for 75% of my data, and equal performance for 25% of my data, and my L2ARC will adapt to my working set across both