Hi,
I have a corrupted dataset that caused a kernel panic upon
importing/mounting the zpool/dataset (see this thread:
http://opensolaris.org/jive/thread.jspa?threadID=135269&tstart=0).
Now, I do have a number of snapshots on this dataset and I am wondering if
there's a way to check if a
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Chad Leigh -- Shire.Net LLC
1) The ZFS box offers a single iSCSI target that exposes all the
zvols as individual disks. When the FreeBSD initiator finds it, it
creates a separate disk
Hi,
How can I quiesce / freeze all writes to ZFS and the zpool if I want to take
hardware-level snapshots or an array snapshot of all devices under a pool?
Are there any commands, ioctls, or APIs available?
Thanks Regards,
sridhar.
--
This message posted from opensolaris.org
On 12/11/2010 13:01, sridhar surampudi wrote:
How can I quiesce / freeze all writes to ZFS and the zpool if I want to take
hardware-level snapshots or an array snapshot of all devices under a pool?
Are there any commands, ioctls, or APIs available?
zpool export pool
zpool import pool
That is the
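A minimal sketch of that export/import approach (the pool name `tank` and the `array_snapshot` command are placeholders I'm introducing for illustration, not commands from this thread):

```shell
# Quiesce by exporting: ZFS flushes outstanding writes and closes the
# pool cleanly, so the on-disk state is consistent for the snapshot.
zpool export tank

# Take the array-level snapshot while no ZFS I/O is in flight.
# "array_snapshot" is a hypothetical stand-in for your array vendor's CLI.
array_snapshot --luns lun0,lun1

# Bring the pool back online.
zpool import tank
```

Note the obvious cost: the pool (and everything mounted from it) is unavailable between the export and the import.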
Hello!
I've got a Solaris 10 10/08 Sparc system and use ZFS pool version 15. I'm
playing
around a bit to make it break.
I've created a mirrored Test pool using mirrored log devices:
# zpool create Test \
mirror /dev/zvol/dsk/data/DiskNr1 /dev/zvol/dsk/data/DiskNr2 \
log mirror
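For reference, the complete form of such a command looks like the following (the message is cut off after `log mirror`, so the two log device paths below are hypothetical):

```shell
# Mirrored data vdev plus a mirrored separate intent-log (slog) vdev,
# all backed by zvols from the "data" pool.
zpool create Test \
    mirror /dev/zvol/dsk/data/DiskNr1 /dev/zvol/dsk/data/DiskNr2 \
    log mirror /dev/zvol/dsk/data/LogNr1 /dev/zvol/dsk/data/LogNr2
```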
On Nov 12, 2010, at 5:21 PM, Alexander Skwar wrote:
Hm. Why are there no errors shown for the logs devices?
You need to crash your machine while the log devices are in use; then you'll see
some reads on the next reboot. "In use" here means that the system is actively
writing to the log devices at the time
Hi Henrik
Yes, I have the following concerns… and as I haven't done any practical tests,
this is only "in theory"…
1.
Reclaiming thin devices will not work: EMC have a 768 KB minimum reclaim
limit (the full 768 KB needs to be all zeros), and HDS have 42 MB, I believe; IBM and
3PAR I don't
Since combining a ZFS storage backend, via NFS or iSCSI, with ESXi heads, I'm
in love. But for one thing: the interconnect between the head and storage.
1G Ethernet is so cheap, but not as fast as desired. 10G Ethernet is fast enough,
but it's overkill, and why is it so bloody expensive? Why is there
On Fri, Nov 12, 2010 at 10:03:08AM -0500, Edward Ned Harvey wrote:
Since combining a ZFS storage backend, via NFS or iSCSI, with ESXi heads, I'm
in love. But for one thing: the interconnect between the head and storage.
1G Ethernet is so cheap, but not as fast as desired. 10G Ethernet is fast
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
On 11/12/2010 10:03 AM, Edward Ned Harvey wrote:
Since combining a ZFS storage backend, via NFS or iSCSI, with ESXi
heads, I'm in love. But for one thing: the interconnect between
the head and storage.
1G Ethernet is so cheap, but not as fast as
Channeling Ethernet will not make it any faster. Each individual connection
will be limited to 1 Gbit. iSCSI with MPxIO may work; NFS will not.
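On the initiator side, a sketch of what the MPxIO approach looks like with the Solaris iSCSI initiator (target addresses are made up, and MPxIO must first be enabled for iSCSI via `mpxio-disable="no"` in /kernel/drv/iscsi.conf plus a reboot):

```shell
# Advertise the same target over two separate gigabit subnets;
# MPxIO merges both paths into a single multipathed LUN and can
# spread I/O across them, unlike link aggregation.
iscsiadm add discovery-address 192.168.10.1:3260
iscsiadm add discovery-address 192.168.11.1:3260
iscsiadm modify discovery --sendtargets enable
mpathadm list lu    # verify two operational paths per logical unit
```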
On Nov 12, 2010 9:26 AM, Eugen Leitl eu...@leitl.org wrote:
On Fri, Nov 12, 2010 at 10:03:08AM -0500, Edward Ned Harvey wrote:
Since combining ZFS
On Fri, Nov 12, 2010 at 09:34:48AM -0600, Tim Cook wrote:
Channeling Ethernet will not make it any faster. Each individual connection
will be limited to 1 Gbit. iSCSI with MPxIO may work; NFS will not.
Would NFSv4 as a cluster system over multiple boxes work?
(This question is not limited to
ESX does not support LACP, only static trunking with a host-configured path
selection algorithm.
Look at Infiniband. Even QDR (32 Gbit) is cheaper per port than most 10GbE
solutions I've seen, and SDR/DDR certainly is. If you want to connect ESX to
storage directly via IB you will find some
Hi Rainer,
I haven't seen this in a while, but I wonder if you just need to set the
bootfs property on your new root pool and/or reapply the bootblocks.
Can you import this pool by booting from a LiveCD and review the
bootfs property value? I would also install the boot blocks on the
rpool2
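Concretely, on SPARC that would be something like the following (the pool name comes from the message, but the boot dataset and disk device names are assumptions):

```shell
# From a LiveCD shell: import the pool under an alternate root,
# inspect and set the boot dataset, then reinstall the SPARC ZFS
# bootblock on the boot disk's slice.
zpool import -R /a rpool2
zpool get bootfs rpool2
zpool set bootfs=rpool2/ROOT/s10root rpool2
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
    /dev/rdsk/c0t0d0s0
```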
On Nov 12, 2010, at 5:54 AM, Edward Ned Harvey wrote:
Why are you sharing iSCSI from Nexenta to FreeBSD? Wouldn't it be better
for Nexenta simply to create ZFS filesystems and then share them over NFS? Much
more flexible in a lot of ways. Unless your design requirements demand limiting
the
Check InfiniBand; the guys at anandtech/zfsbuild.com used that as well.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
On 11/13/10 04:03 AM, Edward Ned Harvey wrote:
Since combining a ZFS storage backend, via NFS or iSCSI, with ESXi
heads, I'm in love. But for one thing: the interconnect between the
head and storage.
1G Ethernet is so cheap, but not as fast as desired. 10G Ethernet is fast
enough, but it's overkill
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Alexander Skwar
I've got a Solaris 10 10/08 Sparc system and use ZFS pool version 15. I'm
playing around a bit to make it break.
Now I write some garbage to one of the log mirror devices.
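A hedged sketch of that step (destructive, for a throwaway test pool only; the zvol path is hypothetical, since the original device list is truncated):

```shell
# Overwrite part of one side of the log mirror with random data,
# then scrub so ZFS detects and reports the damage.
dd if=/dev/urandom of=/dev/zvol/rdsk/data/LogNr1 bs=1024k count=10 conv=notrunc
zpool scrub Test
zpool status -v Test
```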