with raidz2 so you aren't
necessarily going to lose any capacity depending on how you configure your
pool.
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
A coarser solution would be to create a new pool into which you
zfs send/zfs recv the filesystems of the old pool.
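For instance, a minimal sketch of such a migration, with hypothetical pool
and snapshot names (zfs send -R assumes a build with recursive send;
otherwise send each filesystem individually):

  # zpool create newpool raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0
  # zfs snapshot -r oldpool@migrate
  # zfs send -R oldpool@migrate | zfs recv -d newpool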
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
taking a stab at estimating the comparative benefits?
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
On Wed, Nov 01, 2006 at 10:05:01AM +, Ceri Davies wrote:
On Wed, Nov 01, 2006 at 01:33:33AM -0800, Adam Leventhal wrote:
Rick McNeal and I have been working on building support for sharing ZVOLs
as iSCSI targets directly into ZFS. Below is the proposal I'll be
submitting to PSARC
to be persistent. This is similar to sharing
ZFS filesystems via NFS: you can use share(1M), but it doesn't affect the
persistent properties of the dataset.
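For example, a rough sketch with hypothetical dataset names:

  # zfs create -V 10g tank/vols/vol0        # create a ZVOL
  # zfs set shareiscsi=on tank/vols/vol0    # persistently export it as an iSCSI target
  # zfs get shareiscsi tank/vols/vol0       # the setting travels with the dataset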
What properties are you specifically interested in modifying?
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
On 11/1/06, Adam Leventhal [EMAIL PROTECTED] wrote:
On Wed, Nov 01, 2006 at 12:18:36PM +0200, Cyril Plisko wrote:
Note again that all configuration information is stored with the
dataset. As with NFS shared filesystems, iSCSI targets imported on a
different system ... that it could not be shared.
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
, but we've considered making that an option you could
set in the 'shareiscsi' property ('alias=blah', for example). The iSCSI
properties I was referring to are the private metadata for the target
daemon, such as the IQN.
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
On Wed, Nov 01, 2006 at 09:25:26PM +0200, Cyril Plisko wrote:
On 11/1/06, Adam Leventhal [EMAIL PROTECTED] wrote:
What properties are you specifically interested in modifying?
LUN for example. How would I configure a LUN via the zfs command?
You can't. Forgive my ignorance about how iSCSI
it when it was on Server A?
Clients would need to explicitly change the server they're contacting unless
that new server also took over the IP address, hostname, etc.
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
, and
so set 'shareiscsi=on' for the pool.
hey adam, what's direct mean?
It's iSCSI target lingo for vanilla disk emulation.
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
Thanks for all the feedback. This PSARC case was approved yesterday and
will be integrated relatively soon.
Adam
On Wed, Nov 01, 2006 at 01:33:33AM -0800, Adam Leventhal wrote:
Rick McNeal and I have been working on building support for sharing ZVOLs
as iSCSI targets directly into ZFS. Below
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
happens to the
existing data?
Yes. Nothing.
Casper
-- and considerably better than LZJB.
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
not supported, shouldn't the header files be shipped so
people can make sense of kernel data structure types?
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
or in the iSCSI target. Please file a bug.
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
6430003 record size needs to affect zvol reservation size on RAID-Z
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
On Wed, Mar 21, 2007 at 01:36:10AM +0100, Robert Milkowski wrote:
btw: I assume that the compression level will be hard-coded after all,
right?
Nope. You'll be able to choose from gzip-N with N ranging from 1 to 9 just
like gzip(1).
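For example (dataset names hypothetical):

  # zfs set compression=gzip-1 tank/fast    # cheapest, least compression
  # zfs set compression=gzip-9 tank/dense   # slowest, best compression
  # zfs set compression=gzip tank/home      # plain gzip is equivalent to gzip-6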
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
to by its shortened
column name compress.
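That is, for instance (dataset name hypothetical):

  # zfs get compress tank/home              # 'compress' is shorthand for 'compression'
  # zfs set compress=gzip-4 tank/home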
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
On Fri, Mar 23, 2007 at 11:41:21AM -0700, Rich Teer wrote:
I recently integrated this fix into ON:
6536606 gzip compression for ZFS
Cool! Can you recall into which build it went?
I put it back yesterday so it will be in build 62.
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
that FC-AL, ... do better in this case
iSCSI doesn't use TCP, does it? Anyway, the problem is really transport
independent.
It does use TCP. Were you thinking UDP?
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
configuration, BUT would potentially make much less efficient use of storage.
Adam
[1] http://blogs.sun.com/bonwick/entry/raid_z
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
On Wed, Apr 04, 2007 at 03:34:13PM +0200, Constantin Gonzalez wrote:
- RAID-Z is _very_ slow when one disk is broken.
Do you have data on this? The reconstruction should be relatively cheap,
especially when compared with the initial disk access.
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
you collect the performance data? I think a fair test would be to compare
the performance of a fully functional RAID-Z stripe against one with a
missing (absent) device.
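A sketch of such a test, with hypothetical device names, using zpool
offline to simulate the absent disk:

  # zpool create testpool raidz c0t0d0 c0t1d0 c0t2d0
  (run the benchmark against the healthy stripe)
  # zpool offline testpool c0t2d0           # degrade the vdev
  (rerun the same benchmark and compare)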
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
There's nothing today preventing Microsoft (or Apple) from sticking ZFS
into their OS. They'd just have to release the (minimal) diffs to
ZFS-specific files.
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
There are outstanding problems with compression in the ZIO pipeline that
may contribute to the bursty behavior.
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
On Thu, Jun 07, 2007 at 08:38:10PM -0300, Toby Thain wrote:
When should we expect Solaris kernel under OS X? 10.6? 10.7? :-)
I'm sure Jonathan will be announcing that soon. ;-)
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
That is, choose the compression method (if
any), and then, in effect, partition the CD for RAID-Z or mirroring to
stretch the data to fill the entire disc. It wouldn't necessarily be all
that efficient to access, but it would give you resiliency against media
errors.
Adam
--
Adam Leventhal
_synchronous_ writes on an SSD, but at a pretty steep cost.
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
On Thu, Aug 16, 2007 at 05:20:25AM -0700, ramprakash wrote:
# zfs mount -a
1. Mounts c again.
2. But not vol1 [i.e. /dev/zvol/dsk/mytank/b/c does not contain vol1].
Is this the normal behavior or is it a bug?
That looks like a bug. Please file it.
Adam
--
Adam Leventhal
, data is
written locally and then transmitted shortly after that.
Synchronous replication obviously imposes a much larger performance hit,
but asynchronous replication means you may lose data over some recent
period (but the data will always be consistent).
Adam
--
Adam Leventhal, Solaris Kernel Development http://blogs.sun.com/ahl
Assuming relatively large blocks written, RAID-Z and RAID-5 should be
laid out on disk very similarly, resulting in similar read performance.
Did you compare the I/O characteristic of both? Was the bottleneck in
the software or the hardware?
Very interesting experiment...
Adam
--
Adam Leventhal
On Wed, Nov 07, 2007 at 01:47:04PM -0800, can you guess? wrote:
I do consider the RAID-Z design to be somewhat brain-damaged [...]
How so? In my opinion, it seems like a cure for the brain damage of RAID-5.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
not
constitute a significant drawback.
I don't really think this would be feasible given how ZFS is stratified
today, but go ahead and prove me wrong: here are the instructions for
bringing over a copy of the source code:
http://www.opensolaris.org/os/community/tools/scm
- ahl
--
Adam Leventhal
...
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
match a disk of size N with another disk of size 2N and use RAID-Z to
turn them into a single vdev. At that point it's probably a better idea
to build a striped vdev and use ditto blocks to do your data redundancy
by setting copies=2.
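Roughly, with hypothetical disk names:

  # zpool create tank c0t0d0 c0t1d0         # plain striped vdev, no RAID-Z
  # zfs set copies=2 tank                   # ditto blocks: two copies of every block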
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
could be garnered with one using single-parity RAID as with the other
using double-parity RAID. That said, it would be a fairly uncommon
scenario.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
be a coarse-grained solution to your problem, work is underway to address
the problematic interaction between scrubs and snapshots.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
http://blogs.sun.com/ahl/entry/expand_o_matic_raid_z
I'd encourage anyone interested in getting involved with ZFS development to
take a look.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
rather than later.
Well, tip you off _and_ correct the problems if possible. I believe a
long-standing RFE has been to scrub periodically in the background to
ensure that correctable problems don't turn into uncorrectable ones.
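Until then, a cron entry is the usual stopgap (pool name hypothetical):

  # zpool scrub tank                        # verify all checksums; repair what's correctable
  # zpool status -v tank                    # watch progress and any errors found

A weekly crontab line such as '0 3 * * 0 /usr/sbin/zpool scrub tank'
approximates the periodic scrub the RFE asks for.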
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
layer which would break all S10 file systems. So in a very real sense
CIFS simply cannot be backported to S10.
However, the same arguments were made explaining the difficulty backporting
ZFS and GRUB boot to Solaris 10.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
on this in the next few months. I made a
post to my blog that probably won't answer your questions directly, but
may help inform you about what we have in mind.
http://blogs.sun.com/ahl/entry/flash_hybrid_pools_and_future
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
an I/O request on my
enterprise system.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
of view)?
You would lose transactions, but the pool would still reflect a consistent
state.
So is this idea completely crazy?
On the contrary; it's very clever.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
to the x4540 so that would be required before any upgrade to the
equivalent of the Sun Storage 7210.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
On Nov 11, 2008, at 10:41 AM, Brent Jones wrote:
Wish I could get my hands on a beta of this GUI...
Take a look at the VMware version that you can run on any machine:
http://www.sun.com/storage/disk_systems/unified_storage/resources.jsp
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
Is this software available for people who already have thumpers?
We're considering offering an upgrade path for people with existing
thumpers. Given the feedback we've been hearing, it seems very likely
that we will. No word yet on pricing or availability.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
ZFS that's in OpenSolaris
today. A pool created on the appliance could potentially be imported on an
OpenSolaris system; that is, of course, not explicitly supported in the
service contract.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
Keep an eye on blogs.sun.com/fishworks.
A little off topic: Do you know when the SSDs used in the Storage 7000 are
available for the rest of us?
I don't think they will be, but it will be possible to purchase them as
replacement parts.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
On Tue, Nov 18, 2008 at 09:09:07AM -0800, Andre Lue wrote:
Is the web interface on the appliance available for download or will it make
it to opensolaris sometime in the near future?
It's not, and it's unlikely to make it to OpenSolaris.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
not have
that assurance.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
that would hardly be measurable given
the architecture of the Hybrid Storage Pool. Recall that unlike other
products in the same space, we get our IOPS from flash rather than from
a bazillion spindles spinning at 15,000 RPM.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
their
drives with stuff they've bought at Fry's. Is this still covered by their
service plan or would this only be in an unsupported config?
Thanks.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
than ZFS?
Are you telling me ZFS is deficient to the point it can't handle basic
right-sizing like a $15 SATA RAID adapter?
How do these $15 SATA RAID adapters solve the problem? The more details
you could provide the better, obviously.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
complexity and little to no value? They seem very different to me, so I
suppose the answer to your question is: no, I cannot feel the irony oozing
out between my lips, and yes, I'm oblivious to the same.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
to use the full extent of the drives they've paid for, say,
if they're using a vendor that already provides that guarantee?
You know, sort of like you not letting people choose their raid layout...
Yes, I'm not saying it shouldn't be done. I'm asking what the right answer
might be.
Adam
--
Adam
it actually
applies.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
write performance.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
of disks
to the OS, so I really only get 14 disks for data. Correct?
That's right. We market the 7110 as either 2TB = 146GB x 14 or 4.2TB =
300GB x 14 raw capacity.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
that the parts already developed are truly enterprise-grade.
While I don't disagree that the focus for ZFS should be ensuring
enterprise-class reliability and performance, let me assure you that
requirements are driven by the market and not by marketing.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
writes into larger chunks.
I hope that's clear; if it's not, stay tuned for the aforementioned
blog post.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
Don't hear about triple-parity RAID that often:
Author: Adam Leventhal
Repository: /hg/onnv/onnv-gate
Latest revision: 17811c723fb4f9fce50616cb740a92c8f6f97651
Total changesets: 1
Log message:
6854612 triple-parity RAID-Z
http://mail.opensolaris.org/pipermail/onnv-notify/2009-July/009872
no way around it. Fortunately, with proper scrubbing, encountering data
corruption in one stripe on three different drives is highly unlikely.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
maybe it is different now.
Absolutely. I was talking more or less about optimal timing. I realize
that due to the priorities within ZFS and real-world loads it can take
far longer.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
money.
That's our assessment, but it's highly dependent on the specific
characteristics of the MLC NAND itself, the SSD controller, and, of
course, the workload.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
Will BP rewrite allow adding a drive to raidz1 to get raidz2? And what
is the status of BP rewrite? Far away? Not started yet? Planning?
BP rewrite is an important component technology, but there's a bunch
beyond that. It's not a high priority right now for us at Sun.
Adam
--
Adam Leventhal
be satisfying to add another request for it, Matt is already cranking on
it as fast as he can and more requests for it are likely to have the
opposite of the intended effect.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
Hi James,
After investigating this problem a bit I'd suggest avoiding deploying
RAID-Z until this issue is resolved. I anticipate having it fixed in
build 124. Apologies for the inconvenience.
Adam
On Aug 28, 2009, at 8:20 PM, James Lever wrote:
On 28/08/2009, at 3:23 AM, Adam Leventhal
to the last either later today or tomorrow.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
/ (N-1)) is odd, therefore
K * (N-1) + K = K * N is odd.
If N is even, K * N cannot be odd, and therefore the situation cannot
arise. If N is odd, it is possible to satisfy (1) and (2).
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
should take a look at?
Absolutely not. That is an unrelated issue. This problem is isolated to
RAID-Z.
And good luck with the fix for build 124. Are we talking days or weeks
for the fix to be available, do you think? :)
Days or hours.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
RAID-5: block-interleaved distributed parity
RAID-6: block-interleaved double distributed parity
raidz1 is most like RAID-5; raidz2 is most like RAID-6. There's no RAID
level that covers more than two parity disks; raidz3 is most like RAID-6,
but with triple distributed parity.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
with a zfs send/receive on the receiving side?
As with all property changes, new writes get the new properties. Old data
is not rewritten.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
a substantial hit in throughput moving from one to the other.
Tim,
That all really depends on your specific system and workload. As with any
performance-related matter, experimentation is vital for making your final
decision.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
see from the consistent work of Eric Schrock.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
and img0b share blocks?
--
Kjetil T. Homme
Redpill Linpro AS - Changing the game
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
, because clone contents are (in this scenario) just some new data?
The dedup property applies to all writes, so the settings for the pool of
origin don't matter, just those on the destination pool.
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
pool
happen to be 10+1 RAID-Z?
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
that examines recent trends in hard
drives and makes the case for triple-parity RAID. It's at least peripherally
relevant to this conversation:
http://blogs.sun.com/ahl/entry/acm_triple_parity_raid
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
with identical parity?
You're right that a mirror is a degenerate form of raidz1, for example, but
mirrors allow for specific optimizations. While the redundancy would be the
same, the performance would not.
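For instance, these two layouts tolerate the same single-disk failure but
perform differently (device names hypothetical; create one or the other):

  # zpool create tank mirror c0t0d0 c0t1d0  # mirror: a read can be served by either side
  # zpool create tank raidz1 c0t0d0 c0t1d0  # two-disk raidz1: same redundancy, no such optimization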
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
Is it distributed evenly (1.125KB) per device?
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl
! This is great news for ZFS. I'll be very interested to
see the results members of the community can get with your device as part
of their pool. COMSTAR iSCSI performance should be dramatically improved
in particular.
Adam
--
Adam Leventhal, Fishworks http://blogs.sun.com/ahl