Hello,
I've got VDI 3.2.1 and I'm experiencing ZFS iSCSI persistence issues after
rebooting the ZFS Solaris 10 (s9/10 s10x_u9wos_14a X86) server, so I tried to
use NexentaOS_134f since, according
to http://sun.systemnews.com/articles/145/5/Virtualization/22991, VDI 3.1.1
supports COMSTAR.
However, with
On 1/30/2011 12:39 AM, Richard Elling wrote:
Hmmm, doesn't look good on any of the drives.
I'm not sure of the way BSD enumerates devices. Some clever person thought
that hiding the partition or slice would be useful. I don't find it useful.
On a Solaris
system, ZFS can show a disk
On Jan 30, 2011, at 1:37 AM, Thierry Delaitre wrote:
Hello,
I've got VDI 3.2.1 and I'm experiencing ZFS iSCSI persistence issues after rebooting
the ZFS Solaris 10 (s9/10 s10x_u9wos_14a X86) server, so I tried to use
NexentaOS_134f since, according
to
On Jan 30, 2011, at 4:31 AM, Mike Tancsa wrote:
On 1/30/2011 12:39 AM, Richard Elling wrote:
Hmmm, doesn't look good on any of the drives.
I'm not sure of the way BSD enumerates devices. Some clever person thought
that hiding the partition or slice would be useful. I don't find it useful.
Would you recommend a particular distribution to implement a persistent
iSCSI server compatible with VDI?
Thierry.
From: Richard Elling [mailto:richard.ell...@gmail.com]
Sent: 30 January 2011 16:28
To: Thierry Delaitre
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] VDI, ZFS
On Jan 29, 2011, at 10:00 PM, Richard Elling wrote:
On Jan 29, 2011, at 5:48 PM, stuart anderson wrote:
Is there a simple way to query zfs send binary objects for basic information
such as:
1) What snapshot they represent?
2) When they were created?
3) Whether they are the result of
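For what it's worth, on Solaris/illumos the stream header can be inspected without receiving it: zstreamdump decodes the BEGIN record, which carries the target snapshot name (drr_toname), the creation time, and the from-GUID (nonzero for an incremental stream). A minimal sketch, assuming a saved stream file named backup.zfs and a hypothetical dataset tank/data:

```shell
# Decode the DMU send stream records; the BEGIN record answers
# which snapshot this is, when it was created, and whether it
# is a full or incremental stream.
zstreamdump -v < backup.zfs | head -20

# Or inspect a live stream before storing it:
zfs send tank/data@snap1 | zstreamdump | head -20
```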
Is it possible to partition the global setting for the maximum ARC size
with finer grained controls? Ideally, I would like to do this on a per
zvol basis but a setting per zpool would be interesting as well?
The use case is to prioritize which zvol devices should be fully cached
in DRAM on a
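There is no per-dataset ARC size cap as such, but the per-dataset primarycache and secondarycache properties give a coarse prioritization knob: exclude the bulk zvols from ARC data caching so the hot ones keep the DRAM. A sketch, with made-up pool/zvol names:

```shell
# Cache only metadata for a low-priority zvol, freeing DRAM
# for the zvols you want fully cached.
zfs set primarycache=metadata tank/bulk-zvol

# Let the important zvol use both ARC and L2ARC as usual.
zfs set primarycache=all tank/hot-zvol
zfs set secondarycache=all tank/hot-zvol
```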
- Original Message -
Is it possible to partition the global setting for the maximum ARC
size
with finer grained controls? Ideally, I would like to do this on a per
zvol basis but a setting per zpool would be interesting as well?
The use case is to prioritize which zvol devices
On 2011-Jan-28 21:37:50 +0800, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
2- When you want to restore, it's all or nothing. If a single bit is
corrupt in the data stream, the whole stream is lost.
Regarding point #2, I contend that zfs send is better than
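One mitigation, if you do archive send streams, is to record a digest alongside the stream so corruption is at least detectable before you attempt a restore. A sketch, assuming GNU coreutils and a made-up pool name:

```shell
# Save the stream and a digest of it in one pass.
zfs send -R tank@backup | tee /backup/tank.zfs | sha256sum > /backup/tank.zfs.sha256

# Later, verify the archived stream before piping it to zfs receive
# (the checksum file names "-", so feed the stream on stdin).
sha256sum -c /backup/tank.zfs.sha256 < /backup/tank.zfs
```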
On 2011-Jan-30 13:39:22 +0800, Richard Elling richard.ell...@gmail.com wrote:
I'm not sure of the way BSD enumerates devices. Some clever person thought
that hiding the partition or slice would be useful.
No, there's no hiding. /dev/ada0 always refers to the entire physical disk.
If it had
On Mon, Jan 31, 2011 at 3:47 AM, Peter Jeremy
peter.jer...@alcatel-lucent.com wrote:
On 2011-Jan-28 21:37:50 +0800, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
2- When you want to restore, it's all or nothing. If a single bit is
corrupt in the data stream, the
On Jan 30, 2011, at 11:19 AM, Stuart Anderson wrote:
On Jan 29, 2011, at 10:00 PM, Richard Elling wrote:
On Jan 29, 2011, at 5:48 PM, stuart anderson wrote:
Is there a simple way to query zfs send binary objects for basic
information such as:
1) What snapshot they represent?
2) When
Hi all
As I've said on this list a few times, most recently in the thread 'ZFS
not usable (was ZFS Dedup question)', I've been doing some rather thorough
testing of ZFS dedup, and as you can see from those posts, the results weren't
very satisfactory. The docs claim 1-2GB memory usage per terabyte
On Jan 30, 2011, at 1:09 PM, Peter Jeremy wrote:
On 2011-Jan-30 13:39:22 +0800, Richard Elling richard.ell...@gmail.com
wrote:
I'm not sure of the way BSD enumerates devices. Some clever person thought
that hiding the partition or slice would be useful.
No, there's no hiding. /dev/ada0
On Jan 30, 2011, at 12:21 PM, stuart anderson wrote:
Is it possible to partition the global setting for the maximum ARC size
with finer grained controls? Ideally, I would like to do this on a per
zvol basis but a setting per zpool would be interesting as well?
While perhaps not perfect, see
On 1/30/2011 5:26 PM, Joerg Schilling wrote:
Richard Elling richard.ell...@gmail.com wrote:
ufsdump is the problem, not ufsrestore. If you ufsdump an active
file system, there is no guarantee you can ufsrestore it. The only way
to guarantee this is to keep the file system quiesced during the
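For completeness, Solaris does provide a way to dump a quiesced UFS filesystem without unmounting it: fssnap creates a read-only point-in-time snapshot that ufsdump can read safely. A hedged sketch, assuming / is UFS, /var/tmp has room for the backing store, and a tape at /dev/rmt/0:

```shell
# Create a UFS snapshot; fssnap prints the snapshot device
# (e.g. /dev/fssnap/0; use the matching raw device for ufsdump).
snapdev=$(fssnap -F ufs -o bs=/var/tmp /)

# Dump the frozen snapshot instead of the live filesystem.
ufsdump 0uf /dev/rmt/0 "$snapdev"

# Tear the snapshot down when done.
fssnap -d /
```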
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk
The test box is a supermicro thing with a Core2duo CPU, 8 gigs of RAM, 4 gigs
of mirrored SLOG and some 150 gigs of L2ARC on 80GB x25-M drives. The
data drives are 7 2TB
I'm not sure about *docs*, but my rough estimations:
Assume 1TB of actual used storage. Assume 64K block/slab size. (Not
sure how realistic that is -- it depends totally on your data set.)
Assume 300 bytes per DDT entry.
So we have (1024^4 / 65536) * 300 = 5033164800 or about 5GB RAM for one
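That back-of-the-envelope calculation generalizes easily; a small sketch, using the same assumptions as above (one DDT entry per unique block, 300 bytes per entry, 64K blocks):

```python
def ddt_ram_bytes(used_bytes, block_size=64 * 1024, entry_bytes=300):
    """Rough DDT core-memory estimate: one entry per unique block."""
    return (used_bytes // block_size) * entry_bytes

# 1 TiB of unique data at 64K records -> ~5 GB of DDT.
tib = 1024 ** 4
print(ddt_ram_bytes(tib))             # 5033164800
print(ddt_ram_bytes(tib) / 1024 ** 3) # 4.6875
```

Larger record sizes shrink the table proportionally, which is why the block-size assumption dominates the estimate.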
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk
We're getting down to 10-20MB/s on
Oh, one more thing. How are you measuring the speed? Because if you have data
which is highly compressible, or highly duplicated,
From: Peter Jeremy [mailto:peter.jer...@alcatel-lucent.com]
Sent: Sunday, January 30, 2011 3:48 PM
2- When you want to restore, it's all or nothing. If a single bit is
corrupt in the data stream, the whole stream is lost.
OTOH, it renders ZFS send useless for backup or archival purposes.
On Jan 30, 2011, at 1:49 PM, Richard Elling wrote:
On Jan 30, 2011, at 11:19 AM, Stuart Anderson wrote:
On Jan 29, 2011, at 10:00 PM, Richard Elling wrote:
On Jan 29, 2011, at 5:48 PM, stuart anderson wrote:
Is there a simple way to query zfs send binary objects for basic
information
On Jan 30, 2011, at 2:29 PM, Richard Elling wrote:
On Jan 30, 2011, at 12:21 PM, stuart anderson wrote:
Is it possible to partition the global setting for the maximum ARC size
with finer grained controls? Ideally, I would like to do this on a per
zvol basis but a setting per zpool would be