[zfs-discuss] check zfs?

2010-11-12 Thread Stephan Budach
Hi, I have a corrupted dataset that caused a kernel panic upon importing/mounting the zpool/dataset (see this thread: http://opensolaris.org/jive/thread.jspa?threadID=135269&tstart=0). Now, I do have a number of snapshots on this dataset and I am wondering if there's a way to check if a
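A quick way to test whether a given snapshot is still fully readable (a sketch, assuming the pool can be imported at all; tank/data@snap1 is a placeholder name) is to scrub the pool and then stream the snapshot to /dev/null, which forces every one of its blocks to be read:

  # zpool scrub tank
  # zpool status -v tank
  # zfs send tank/data@snap1 > /dev/null

If the send completes without error, the snapshot's blocks are intact; zpool status -v lists any datasets or snapshots with permanent errors.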

Re: [zfs-discuss] couple of ZFS questions

2010-11-12 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Chad Leigh -- Shire.Net LLC 1) The ZFS box offers a single iSCSI target that exposes all the zvols as individual disks. When the FreeBSD initiator finds it, it creates a separate disk
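For reference, a single target exposing several zvols as individual disks is what COMSTAR produces when each zvol is registered as a logical unit and given a view; a rough sketch (tank/vol1 and the GUID are placeholders, and the exact setup on a NexentaStor box may differ):

  # zfs create -V 100G tank/vol1
  # sbdadm create-lu /dev/zvol/rdsk/tank/vol1    (prints the new LU's GUID)
  # stmfadm add-view 600144f0...                 (GUID from the previous step)
  # itadm create-target                          (one target serves all the LUs)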

[zfs-discuss] how to quiesce and unquiesce zfs and zpool for array/hardware snapshots?

2010-11-12 Thread sridhar surampudi
Hi, how can I quiesce / freeze all writes to zfs and zpool if I want to take hardware-level snapshots or array snapshots of all devices under a pool? Are there any commands, ioctls, or APIs available? Thanks & Regards, sridhar.

Re: [zfs-discuss] how to quiesce and unquiesce zfs and zpool for array/hardware snapshots?

2010-11-12 Thread Darren J Moffat
On 12/11/2010 13:01, sridhar surampudi wrote: How can I quiesce / freeze all writes to zfs and zpool if I want to take hardware-level snapshots or array snapshots of all devices under a pool? Are there any commands, ioctls, or APIs available? zpool export pool; zpool import pool. That is the
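A minimal sketch of that sequence, with the array snapshot taken while the pool is exported (here "pool" is the pool name, and the snapshot step is whatever your array vendor provides):

  # zpool export pool
    ... take the hardware/array snapshot of every device in the pool ...
  # zpool import pool

Export flushes all pending writes and leaves the pool consistent on disk, so the array snapshot captures a clean image; the cost is that the pool is offline between the two commands.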

[zfs-discuss] ZFS doesn't notice errors in mirrored log device?

2010-11-12 Thread Alexander Skwar
Hello! I've got a Solaris 10 10/08 SPARC system and use ZFS pool version 15. I'm playing around a bit, trying to make it break. I've created a mirrored Test pool using mirrored log devices: # zpool create Test \ mirror /dev/zvol/dsk/data/DiskNr1 /dev/zvol/dsk/data/DiskNr2 \ log mirror
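The truncated command presumably continued along these lines (a reconstruction; the log zvol names LogNr1/LogNr2 are assumed):

  # zpool create Test \
      mirror /dev/zvol/dsk/data/DiskNr1 /dev/zvol/dsk/data/DiskNr2 \
      log mirror /dev/zvol/dsk/data/LogNr1 /dev/zvol/dsk/data/LogNr2

zpool status Test would then show one mirrored data vdev plus one mirrored log vdev.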

Re: [zfs-discuss] ZFS doesn't notice errors in mirrored log device?

2010-11-12 Thread Victor Latushkin
On Nov 12, 2010, at 5:21 PM, Alexander Skwar wrote: Hm. Why are there no errors shown for the log devices? You need to crash your machine while the log devices are in use, then you'll see some reads on the next reboot. In use here means that the system is actively writing to the log devices at the time

Re: [zfs-discuss] Thin devices/reclamation with ZFS?

2010-11-12 Thread Lars Albnisson
Hi Henrik, yes, I have the following concerns… and as I haven't done any practical tests this is only "in theory"… 1. Reclaiming thin devices will not work: EMC has a 768 KB minimum reclaim limit (the 768 KB needs to be all zeros), and HDS has 42 MB, I believe; IBM and 3PAR I don't
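The usual workaround on other filesystems is to zero-fill free space so the array can reclaim it, roughly (a sketch; tank/fs is a placeholder, and compression must be off or ZFS turns the zeros into holes that never reach the array):

  # zfs set compression=off tank/fs
  # dd if=/dev/zero of=/tank/fs/zerofill bs=1M
  # rm /tank/fs/zerofill

Even then, ZFS's copy-on-write allocation gives no guarantee of producing the contiguous, aligned runs of zeros (768 KB on EMC, per the above) that the array needs before it will reclaim a chunk.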

[zfs-discuss] Faster than 1G Ether... ESX to ZFS

2010-11-12 Thread Edward Ned Harvey
Since combining ZFS storage backend, via nfs or iscsi, with ESXi heads, I'm in love. But for one thing. The interconnect between the head & storage. 1G Ether is so cheap, but not as fast as desired. 10G ether is fast enough, but it's overkill, and why is it so bloody expensive? Why is there

Re: [zfs-discuss] Faster than 1G Ether... ESX to ZFS

2010-11-12 Thread Eugen Leitl
On Fri, Nov 12, 2010 at 10:03:08AM -0500, Edward Ned Harvey wrote: Since combining ZFS storage backend, via nfs or iscsi, with ESXi heads, I'm in love. But for one thing. The interconnect between the head & storage. 1G Ether is so cheap, but not as fast as desired. 10G ether is fast

Re: [zfs-discuss] Faster than 1G Ether... ESX to ZFS

2010-11-12 Thread Kyle McDonald
On 11/12/2010 10:03 AM, Edward Ned Harvey wrote: Since combining ZFS storage backend, via nfs or iscsi, with ESXi heads, I'm in love. But for one thing. The interconnect between the head & storage. 1G Ether is so cheap, but not as fast as

Re: [zfs-discuss] Faster than 1G Ether... ESX to ZFS

2010-11-12 Thread Tim Cook
Channeling Ethernet will not make it any faster; each individual connection will be limited to 1 Gbit. iSCSI with mpxio may work; NFS will not. On Nov 12, 2010 9:26 AM, Eugen Leitl eu...@leitl.org wrote: On Fri, Nov 12, 2010 at 10:03:08AM -0500, Edward Ned Harvey wrote: Since combining ZFS
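On Solaris the combination Tim mentions is MPxIO plus multiple iSCSI sessions per target; a sketch based on the stmsboot(1M) and iscsiadm(1M) man pages (verify the options on your release):

  # stmsboot -e                             (enable MPxIO; needs a reboot)
  # iscsiadm modify initiator-node -c 2     (two configured sessions per target)
  # mpathadm list lu                        (confirm both paths show up)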

Re: [zfs-discuss] Faster than 1G Ether... ESX to ZFS

2010-11-12 Thread Eugen Leitl
On Fri, Nov 12, 2010 at 09:34:48AM -0600, Tim Cook wrote: Channeling Ethernet will not make it any faster; each individual connection will be limited to 1 Gbit. iSCSI with mpxio may work; NFS will not. Would NFSv4 as a cluster system over multiple boxes work? (This question is not limited to

Re: [zfs-discuss] Faster than 1G Ether... ESX to ZFS

2010-11-12 Thread Saxon, Will
ESX does not support LACP, only static trunking with a host-configured path selection algorithm. Look at InfiniBand. Even QDR (32 Gbit) is cheaper per port than most 10GbE solutions I've seen, and SDR/DDR certainly is. If you want to connect ESX to storage directly via IB you will find some

Re: [zfs-discuss] Booting fails with `Can not read the pool label' error

2010-11-12 Thread Cindy Swearingen
Hi Rainer, I haven't seen this in a while, but I wonder if you just need to set the bootfs property on your new root pool and/or reapply the bootblocks. Can you import this pool by booting from a LiveCD and review the bootfs property value? I would also install the boot blocks on the rpool2
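Concretely, that would be something along these lines (rpool2/ROOT/myBE and c0t1d0s0 are placeholders for the actual boot environment and boot disk slice; installboot is the SPARC form, installgrub the x86 one):

  # zpool set bootfs=rpool2/ROOT/myBE rpool2
  # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0
    (x86: installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0)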

Re: [zfs-discuss] couple of ZFS questions

2010-11-12 Thread Chad Leigh -- Shire.Net LLC
On Nov 12, 2010, at 5:54 AM, Edward Ned Harvey wrote: Why are you sharing iSCSI from Nexenta to FreeBSD? Wouldn't it be better for Nexenta to simply create ZFS filesystems and then share NFS? Much more flexible in a lot of ways. Unless your design requirements require limiting the

Re: [zfs-discuss] Faster than 1G Ether... ESX to ZFS

2010-11-12 Thread SR
Check InfiniBand; the guys at anandtech/zfsbuild.com used that as well.

Re: [zfs-discuss] Faster than 1G Ether... ESX to ZFS

2010-11-12 Thread Ian Collins
On 11/13/10 04:03 AM, Edward Ned Harvey wrote: Since combining ZFS storage backend, via nfs or iscsi, with ESXi heads, I’m in love. But for one thing. The interconnect between the head & storage. 1G Ether is so cheap, but not as fast as desired. 10G ether is fast enough, but it’s overkill

Re: [zfs-discuss] ZFS doesn't notice errors in mirrored log device?

2010-11-12 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Alexander Skwar I've got a Solaris 10 10/08 SPARC system and use ZFS pool version 15. I'm playing around a bit, trying to make it break. Now I write some garbage to one of the log mirror devices.
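The experiment probably looked roughly like this (a sketch; the log zvol path follows the naming from the original message, and the dd sizes are arbitrary):

  # dd if=/dev/urandom of=/dev/zvol/dsk/data/LogNr1 bs=1M count=10
  # zpool scrub Test
  # zpool status -v Test

As Victor notes earlier in the thread, a scrub may still show no log errors: an idle slog holds almost nothing that ZFS ever reads back, so the damage only surfaces when the log is replayed after a crash.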