In my case, it gives an error that I need at least 11 disks (which I don't)
but the point is that raidz parity does not seem to be limited to 3. Is this
not true?
RAID-Z is limited to 3 parity disks. The error message is giving you false hope
and that's a bug. If you had plugged in 11 disks
be twice as good. (I've just done some tests on the MacZFS port; see my
blog for more info.)
Here's a good blog comparing some ZFS compression modes in the context of the
Sun Storage 7000:
http://blogs.sun.com/dap/entry/zfs_compression
Adam
--
Adam Leventhal, Fishworks http
Hey Robert,
I've filed a bug to track this issue. We'll try to reproduce the problem and
evaluate the cause. Thanks for bringing this to our attention.
Adam
On Jun 24, 2010, at 2:40 AM, Robert Milkowski wrote:
On 23/06/2010 18:50, Adam Leventhal wrote:
Does it mean that for dataset used
the same number of iops.
Any idea why?
--
Robert Milkowski
http://milek.blogspot.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
--
Adam Leventhal, Fishworks
parity.
What is the total width of your raidz1 stripe?
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
of resilience and
performance, no doubt. Which makes me think the pretty interface becomes an
annoyance sometimes. Let's wait for 2010.Q1 :)
As always, we welcome feedback (although zfs-discuss is not the appropriate
forum), and are eager to improve the product.
Adam
--
Adam Leventhal
Hi, any idea why ZFS does not dedup files with this format?
file /opt/XXX/XXX/data
VAX COFF executable - version 7926
With dedup enabled, ZFS will identify and remove duplicate blocks regardless
of the data format.
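The mechanics can be sketched in a few lines. This is a toy model of content-addressed deduplication, not ZFS's actual on-disk logic: dedup keys on a checksum of each block's contents, so the file format is irrelevant.

```python
import hashlib

def write_blocks(blocks):
    """Toy content-addressed store: identical blocks share one stored copy."""
    store = {}   # checksum -> stored block (one physical copy per checksum)
    refs = []    # what each logical block points at
    for block in blocks:
        key = hashlib.sha256(block).hexdigest()
        store.setdefault(key, block)  # only the first copy is kept
        refs.append(key)
    return store, refs

store, refs = write_blocks([b"header", b"payload", b"payload"])
# Three logical blocks, but only two unique copies are stored.
```

Whether the blocks came from a VAX COFF executable or a text file makes no difference: only the checksum of the bytes matters.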
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
the resilvering process kills performance.
Maybe, but then it depends on how much you rely on your disks for performance.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
. As
with any block-based caching, this device has no notion of the semantic
meaning of a given block so there's only so much intelligence it can bring
to bear on the problem.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
! This is great news for ZFS. I'll be very interested to
see the results members of the community can get with your device as part
of their pool. COMSTAR iSCSI performance should be dramatically improved
in particular.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
?
Is it distributed evenly (1.125KB) per device?
--
Adam Leventhal, Fishworks
with identical parity?
You're right that a mirror is a degenerate form of raidz1, for example, but
mirrors allow for specific optimizations. While the redundancy would be the
same, the performance would not.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
that examines recent trends in hard
drives and makes the case for triple-parity RAID. It's at least peripherally
relevant to this conversation:
http://blogs.sun.com/ahl/entry/acm_triple_parity_raid
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
pool
happen to be 10+1 RAID-Z?
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
and img0b share blocks?
--
Kjetil T. Homme
Redpill Linpro AS - Changing the game
--
Adam Leventhal, Fishworks
, because
clone contents are (in this scenario) just some new data?
The dedup property applies to all writes so the settings for the pool of origin
don't matter, just those on the destination pool.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
see from the consistent work of Eric Schrock.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
a substantial hit in throughput moving from one to the
other.
Tim,
That all really depends on your specific system and workload. As with any
performance-related matter, experimentation is vital for making your final
decision.
Adam
--
Adam Leventhal, Fishworks http
with a zfs
send/receive on the receiving side?
As with all property changes, new writes get the new properties. Old data
is not rewritten.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
parity
RAID-6: block-interleaved double distributed parity
raidz1 is most like RAID-5; raidz2 is most like RAID-6. There's no standard
RAID level that covers more than two parity disks, but raidz3 is most like
RAID-6 with triple distributed parity.
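As a rough capacity comparison (illustrative arithmetic only, ignoring ZFS padding, allocation rounding, and metadata overhead): an N-disk raidz vdev with P parity disks stores N − P disks' worth of data.

```python
def usable_disks(n_disks: int, parity: int) -> int:
    """Data capacity of a raidz vdev in whole-disk units.

    parity = 1, 2, or 3 for raidz1/raidz2/raidz3. Padding and metadata
    overhead are ignored; this is back-of-envelope only.
    """
    if not 1 <= parity <= 3 or n_disks <= parity:
        raise ValueError("invalid raidz geometry")
    return n_disks - parity

# A 9-disk raidz3 vdev survives any 3 disk failures and stores 6 disks of data.
```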
Adam
--
Adam Leventhal, Fishworks
/ (N-1)) is odd
therefore
K * (N-1) + K = K * N is odd
If N is even, K * N cannot be odd, and therefore the situation cannot
arise.
If N is odd, it is possible to satisfy (1) and (2).
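The parity argument can be checked mechanically with a quick brute force over small K and N (variable names follow the derivation above):

```python
# With K sectors in each of the N-1 data columns plus K parity sectors,
# the total is K*(N-1) + K = K*N. A product K*N is odd only when both
# factors are odd, so an even N can never yield an odd total.
def total_sectors(k: int, n: int) -> int:
    return k * (n - 1) + k  # algebraically equal to k * n

even_n_odd_total = [(k, n) for k in range(1, 40)
                    for n in range(2, 40, 2)      # even N only
                    if total_sectors(k, n) % 2 == 1]
# even_n_odd_total is empty: with even N the situation cannot arise.
```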
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
should take a look at?
Absolutely not. That is an unrelated issue. This problem is isolated to
RAID-Z.
And good luck with the fix for build 124. Are we talking days or weeks for the
fix to be available, do you think? :)
--
Days or hours.
Adam
--
Adam Leventhal, Fishworks http
to the last either later today or tomorrow.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
--
Adam
Hi James,
After investigating this problem a bit I'd suggest avoiding deploying
RAID-Z
until this issue is resolved. I anticipate having it fixed in build 124.
Apologies for the inconvenience.
Adam
On Aug 28, 2009, at 8:20 PM, James Lever wrote:
On 28/08/2009, at 3:23 AM, Adam Leventhal wrote:
be satisfying to add
another
request for it, Matt is already cranking on it as fast as he can and
more
requests for it are likely to have the opposite of the intended effect.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
Will BP rewrite allow adding a drive to raidz1 to get raidz2? And
how is status on BP rewrite? Far away? Not started yet? Planning?
BP rewrite is an important component technology, but there's a bunch
beyond
that. It's not a high priority right now for us at Sun.
Adam
--
Adam Leventhal
@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
money.
That's our assessment, but it's highly dependent on the specific
characteristics of the MLC NAND itself, the SSD controller, and, of
course, the workload.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
maybe it is
different now.
Absolutely. I was talking more or less about optimal timing. I realize that
due to the priorities within ZFS and real-world loads that it can take far
longer.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
that the parts already developed are truly enterprise-grade.
While I don't disagree that the focus for ZFS should be ensuring
enterprise-class reliability and performance, let me assure you that
requirements are driven by the market and not by marketing.
Adam
--
Adam Leventhal, Fishworks
writes into larger chunks.
I hope that's clear; if it's not, stay tuned for the aforementioned
blog post.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
Don't hear about triple-parity RAID that often:
Author: Adam Leventhal
Repository: /hg/onnv/onnv-gate
Latest revision: 17811c723fb4f9fce50616cb740a92c8f6f97651
Total changesets: 1
Log message:
6854612 triple-parity RAID-Z
http://mail.opensolaris.org/pipermail/onnv-notify/2009-July/009872
no way around it. Fortunately with proper
scrubbing
encountering data corruption in one stripe on three different drives is
highly unlikely.
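A back-of-envelope illustration of why (the per-stripe corruption rate below is a made-up, purely hypothetical number):

```python
# If each drive independently has probability p of harboring latent
# corruption in a given stripe, three overlapping hits in one stripe occur
# with probability p**3. Regular scrubs keep p small by repairing
# correctable errors, which makes the triple event negligible.
p = 1e-4                 # hypothetical per-drive, per-stripe corruption rate
triple_failure = p ** 3  # around 1e-12: eight orders of magnitude rarer
```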
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
of disks
to the OS, so I really only get 14 disks for data. Correct?
That's right. We market the 7110 as either 2TB = 146GB x 14 or 4.2TB =
300GB x 14 raw capacity.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
.
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
write performance.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
that would hardly be measurable given
the architecture of the Hybrid Storage Pool. Recall that unlike other
products in the same space, we get our IOPS from flash rather than from
a bazillion spindles spinning at 15,000 RPM.
Adam
--
Adam Leventhal, Fishworks http
their
drives with stuff they've bought at Fry's. Is this still covered by their
service plan or would this only be in an unsupported config?
Thanks.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
than ZFS?
Are you telling me ZFS is deficient to the point it can't handle basic
right-sizing like a $15 SATA RAID adapter?
How do these $15 SATA RAID adapters solve the problem? The more details you
could provide the better, obviously.
Adam
--
Adam Leventhal, Fishworks
complexity and little to no value? They seem
very different to me, so I suppose the answer to your question is: no I cannot
feel the irony oozing out between my lips, and yes I'm oblivious to the same.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
to use the full extent of the drives they've paid for, say,
if they're using a vendor that already provides that guarantee?
You know, sort of like you not letting people choose their raid layout...
Yes, I'm not saying it shouldn't be done. I'm asking what the right answer
might be.
Adam
--
Adam
it actually
applies.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
not have
that assurance.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
On Tue, Nov 18, 2008 at 09:09:07AM -0800, Andre Lue wrote:
Is the web interface on the appliance available for download or will it make
it to opensolaris sometime in the near future?
It's not, and it's unlikely to make it to OpenSolaris.
Adam
--
Adam Leventhal, Fishworks
ZFS that's in OpenSolaris
today. A pool created on the appliance could potentially be imported on an
OpenSolaris system; that is, of course, not explicitly supported in the
service contract.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
. Keep an eye on blogs.sun.com/fishworks.
A little off topic: Do you know when the SSDs used in the Storage 7000 are
available for the rest of us?
I don't think they will be, but it will be possible to purchase them as
replacement parts.
Adam
--
Adam Leventhal, Fishworks
.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
to the x4540 so that would be required before any upgrade to the
equivalent of the Sun Storage 7210.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
On Nov 11, 2008, at 10:41 AM, Brent Jones wrote:
Wish I could get my hands on a beta of this GUI...
Take a look at the VMware version that you can run on any machine:
http://www.sun.com/storage/disk_systems/unified_storage/resources.jsp
Adam
--
Adam Leventhal, Fishworks
Is this software available for people who already have thumpers?
We're considering offering an upgrade path for people with existing
thumpers. Given the feedback we've been hearing, it seems very likely
that we will. No word yet on pricing or availability.
Adam
--
Adam Leventhal, Fishworks
of view)?
You would lose transactions, but the pool would still reflect a
consistent
state.
So is this idea completely crazy?
On the contrary; it's very clever.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
on this in the next few months. I made a
post to my blog that probably won't answer your questions directly, but
may help inform you about what we have in mind.
http://blogs.sun.com/ahl/entry/flash_hybrid_pools_and_future
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
an I/O request on my
enterprise system.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
layer which would break all S10 file
systems. So in a very real sense CIFS simply cannot be backported
to S10.
However, the same arguments were made explaining the difficulty backporting
ZFS and GRUB boot to Solaris 10.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com
rather than later.
Well, tip you off _and_ correct the problems if possible. I believe a long-
standing RFE has been to scrub periodically in the background to ensure that
correctable problems don't turn into uncorrectable ones.
Adam
--
Adam Leventhal, Fishworks http
/expand_o_matic_raid_z
I'd encourage anyone interested in getting involved with ZFS development to
take a look.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
be a coarse-grained
solution to
your problem, work is underway to address the problematic interaction
between
scrubs and snapshots.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
could be
garnered with one using single-parity RAID as with the other using double-
parity RAID. That said, it would be a fairly uncommon scenario.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
match a disk of size N with another disk of size 2N
and use RAID-Z to turn them into a single vdev. At that point it's
probably a better idea to build a striped vdev and use ditto blocks to
do
your data redundancy by setting copies=2.
Adam
--
Adam Leventhal, Fishworks http
...
Adam
--
Adam Leventhal, FishWorks http://blogs.sun.com/ahl
not
constitute a significant drawback.
I don't really think this would be feasible given how ZFS is stratified
today, but go ahead and prove me wrong: here are the instructions for
bringing over a copy of the source code:
http://www.opensolaris.org/os/community/tools/scm
- ahl
--
Adam Leventhal
On Wed, Nov 07, 2007 at 01:47:04PM -0800, can you guess? wrote:
I do consider the RAID-Z design to be somewhat brain-damaged [...]
How so? In my opinion, it seems like a cure for the brain damage of RAID-5.
Adam
--
Adam Leventhal, FishWorks http://blogs.sun.com/ahl
. Assuming relatively large blocks
written, RAID-Z and RAID-5 should be laid out on disk very similarly
resulting in similar read performance.
Did you compare the I/O characteristic of both? Was the bottleneck in
the software or the hardware?
Very interesting experiment...
Adam
--
Adam Leventhal
, data is
written locally and then transmitted shortly after that.
Synchronous replication obviously imposes a much larger performance hit,
but asynchronous replication means you may lose data over some recent
period (but the data will always be consistent).
Adam
--
Adam Leventhal, Solaris Kernel
On Thu, Aug 16, 2007 at 05:20:25AM -0700, ramprakash wrote:
#zfs mount -a
1. mounts c again.
2. but not vol1.. [ ie /dev/zvol/dsk/mytank/b/c does not contain vol1
]
Is this the normal behavior or is it a bug?
That looks like a bug. Please file it.
Adam
--
Adam Leventhal
of Mathematics and Statistics
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
_synchronous_ writes on an SSD, but at a pretty steep cost.
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
. That is, choose the compression method (if
any), and then, in effect, partition the CD for RAID-Z or mirroring to
stretch the data to fill the entire disc. It wouldn't necessarily be all
that efficient to access, but it would give you resiliency against media
errors.
Adam
--
Adam Leventhal
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
On Thu, Jun 07, 2007 at 08:38:10PM -0300, Toby Thain wrote:
When should we expect Solaris kernel under OS X? 10.6? 10.7? :-)
I'm sure Jonathan will be announcing that soon. ;-)
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
.
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
. There are outstanding problems with compression in
the ZIO pipeline that may contribute to the bursty behavior.
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
.)
There's nothing today preventing Microsoft (or Apple) from sticking ZFS
into their OS. They'd just have to release the (minimal) diffs to
ZFS-specific files.
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
On Wed, Apr 04, 2007 at 03:34:13PM +0200, Constantin Gonzalez wrote:
- RAID-Z is _very_ slow when one disk is broken.
Do you have data on this? The reconstruction should be relatively cheap
especially when compared with the initial disk access.
Adam
--
Adam Leventhal, Solaris Kernel
you collect the performance
data? I think a fair test would be to compare the performance of a fully
functional RAID-Z stripe against a one with a missing (absent) device.
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
configuration, BUT would potentially make much less efficient use of storage.
Adam
[1] http://blogs.sun.com/bonwick/entry/raid_z
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
to by its shortened
column name compress.
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
On Fri, Mar 23, 2007 at 11:41:21AM -0700, Rich Teer wrote:
I recently integrated this fix into ON:
6536606 gzip compression for ZFS
Cool! Can you recall into which build it went?
I put it back yesterday so it will be in build 62.
Adam
--
Adam Leventhal, Solaris Kernel Development
that FC-AL, ... do better in this case
iscsi doesn't use TCP, does it? Anyway, the problem is really transport
independent.
It does use TCP. Were you thinking UDP?
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
:
6430003 record size needs to affect zvol reservation size on RAID-Z
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
On Wed, Mar 21, 2007 at 01:36:10AM +0100, Robert Milkowski wrote:
btw: I assume that compression level will be hard coded after all,
right?
Nope. You'll be able to choose from gzip-N with N ranging from 1 to 9 just
like gzip(1).
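The level tradeoff is easy to see with zlib, which exposes the same 1-9 scale gzip uses: higher levels spend more CPU searching for matches, and on compressible input the output never gets meaningfully larger.

```python
import zlib

# Repetitive, highly compressible sample input.
data = b"The quick brown fox jumps over the lazy dog. " * 2000

# Compressed size at a fast, a default, and a maximum-effort level.
sizes = {level: len(zlib.compress(data, level)) for level in (1, 6, 9)}
# On input like this, level 9 compresses at least as well as level 1.
```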
Adam
--
Adam Leventhal, Solaris Kernel Development http
or in the iSCSI target. Please file a bug.
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
not supported, shouldn't the header files be shipped so
people can make sense of kernel data structure types?
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
-- and considerably better than LZJB.
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
happens to the
existing data?
Yes. Nothing.
Casper
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
Thanks for all the feedback. This PSARC case was approved yesterday and
will be integrated relatively soon.
Adam
On Wed, Nov 01, 2006 at 01:33:33AM -0800, Adam Leventhal wrote:
Rick McNeal and I have been working on building support for sharing ZVOLs
as iSCSI targets directly into ZFS. Below
, and
so set 'shareiscsi=on' for the pool.
hey adam, what's direct mean?
It's iSCSI target lingo for vanilla disk emulation.
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
On Wed, Nov 01, 2006 at 10:05:01AM +, Ceri Davies wrote:
On Wed, Nov 01, 2006 at 01:33:33AM -0800, Adam Leventhal wrote:
Rick McNeal and I have been working on building support for sharing ZVOLs
as iSCSI targets directly into ZFS. Below is the proposal I'll be
submitting to PSARC
to be persistent. This is similar to sharing
ZFS filesystems via NFS: you can use share(1M), but it doesn't affect the
persistent properties of the dataset.
What properties are you specifically interested in modifying?
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl