)...
//Jim
2012-03-21 22:53, Richard Elling wrote:
...
This is why a single
vdev's random-read performance is equivalent to the random-read
performance of
a single drive.
It is not as bad as that. The actual worst case number for a HDD with
zfs_vdev_max_pending
of one is:
average IOPS * ((D+P) / D)
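A hedged worked example (hypothetical numbers, just to illustrate the
formula): a raidz1 vdev of 4 data + 1 parity disks, each capable of
~100 random-read IOPS, gives 100 * ((4+1)/4) = 125 IOPS worst case
for the vdev as a whole.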
(not shared with other files' BPs), inflated by ditto
copies=2 and raidz/mirror redundancy.
Right/wrong?
Thanks,
//Jim
in illumos, their known-good HCL might be
quite relevant for OpenIndiana users in general, I think :)
Good luck, really!
//Jim
,
before going down the lock path…
This assumes that most or all of CPU utilization is %sys. If it's %usr, we take
a different approach.
Thanks
/jim
On Mar 25, 2012, at 1:29 AM, Aubrey Li wrote:
Hi,
I'm migrating a web server (Apache+PHP) from RHEL to Solaris. During the
stress testing
heard that VMWare has some smallish limit on the number
of NFS connections, but 30 should be bearable...
HTH,
//Jim Klimov
- if it is normal or not).
Also I'm not sure if tmpfs gets the benefits of caching,
and ZFS ARC cache can consume lots of RAM and thus push
tmpfs out to swap.
As a random guess, try pointing PHP tmp directory to
/var/tmp (backed by zfs) and see if any behaviors change?
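For instance (a hedged sketch only, assuming a stock php.ini; directive
names and paths are illustrative, not taken from this setup):
; php.ini excerpt - point PHP scratch space at ZFS-backed /var/tmp
upload_tmp_dir = /var/tmp
session.save_path = /var/tmp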
Good luck,
//Jim
THE PROBLEM - Linux is 15% sys, 55% usr,
Solaris is 30% sys, 70% usr, running the same workload,
doing the same amount of work and delivering the same level
of performance. Please validate that problem statement.
On Mar 25, 2012, at 9:51 PM, Aubrey Li wrote:
On Mon, Mar 26, 2012 at 4:18 AM, Jim
As a random guess, try pointing PHP tmp directory to
/var/tmp (backed by zfs) and see if any behaviors change?
Good luck,
//Jim
Thanks for your suggestions. Actually the default PHP tmp directory
was /var/tmp, and I changed /var/tmp to /tmp. This reduced zfs
root lock contention
stacktrace should tell you
in which functions you should start looking...
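For example, a hedged sketch (CPU number and durations are hypothetical)
that samples kernel stacks on one busy CPU to see where the %sys time goes:
# dtrace -n '
  profile-997hz
  /cpu == 31 && arg0 != 0/   /* arg0 != 0: we interrupted kernel code */
  { @[stack()] = count(); }
  tick-30s { trunc(@, 20); exit(0); }'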
Good luck,
//Jim
the stats took zdb at
least 40 minutes.
HTH,
//Jim
2012-04-05 16:04, Jim Klimov wrote:
2012-04-04 23:27, Jan-Aage Frydenbø-Bruvoll wrote:
Which OS and release?
This is OpenIndiana oi_148, ZFS pool version 28.
There was a bug in some releases circa 2010 that you might be
hitting. It is
harmless, but annoying.
Ok - what bug is this, how
into RAID and can do miracles
with SATA disks. Reality has shown to many of us that
many SATA implementations existing in the wild should
be avoided... so we're back to good vendors' higher end
expensive SATAs or better yet SAS drives. Not inexpensive
anymore again :(
Thanks,
//Jim
on the
project roadmap)?
Just a thought...
//Jim
2012-04-17 5:15, Richard Elling wrote:
For the archives...
Write-back cache enablement is toxic for file systems that do not issue
cache flush commands, such as Solaris' UFS. In the early days of ZFS,
on Solaris 10 or before ZFS was bootable on OpenSolaris, it was not
uncommon to have ZFS and
2012-04-17 14:47, Matt Keenan wrote:
- or is it possible that one of the devices being a USB device is
causing the failure ? I don't know.
Might be, I've got little experience with those beside LiveUSB
imagery ;)
My reason for splitting the pool was so I could attach the clean USB
rpool to
- impressive and interesting,
//Jim
got legally leaked into
Linux, and if they were there, then they might be legally
included into other ZFS source code projects.
I hope this subject is closed for now ;( without personal
gripes ;)
//Jim
is not there, is it a worthy RFE, maybe for GSoC?
//Jim
.
If only ZFS could queue scrubbing reads more linearly... ;)
//Jim
On 2012-04-26 2:20, Ian Collins wrote:
On 04/26/12 09:54 AM, Bob Friesenhahn wrote:
On Wed, 25 Apr 2012, Rich Teer wrote:
Perhaps I'm being overly simplistic, but in this scenario, what would
prevent
one from having, on a single file server, /exports/nodes/node[0-15],
and then
having each node
to access
it. Their actual worksets would be stored locally in the
cachefs backing stores on each workstation, and would not
burden the network and the fileserver until there
are some writes to be replicated into central storage.
They would have approximately one common share to mount ;)
//Jim
operations budget
(and further planning, etc.) has only been hit for 1Tb.
HTH,
//Jim
be sparse, compressible, and/or not unique), but
in the end that's unpredictable from the start.
HTH,
//Jim
into rpool), destroy opt, relabel the Solaris slices
with format, expand rpool, create a new opt. You should back it
up anyway before such dangerous experiments.
But for the sheer excitement of the experiment, you can give
the dd-series a try, and tell us how it goes.
HTH,
//Jim
mean heavy fragmentation and lots of random small IOs...
HTH,
//Jim
zfs_resilver_min_time_ms/W0t2 | mdb -kw
mdb: failed to dereference symbol: unknown symbol name
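As a hedged aside: if the symbol is simply absent from the running
kernel build, mdb cannot patch it; on builds that do carry the tunable,
the persistent alternative is an /etc/system entry, applied at the
next boot (the value below is hypothetical):
* /etc/system excerpt
set zfs:zfs_resilver_min_time_ms = 2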
Thanks for any ideas,
//Jim Klimov
2012-05-11 17:18, Bob Friesenhahn wrote:
On Fri, 11 May 2012, Jim Klimov wrote:
Hello all,
SHORT VERSION:
What conditions can cause the reset of the resilvering
process? My lost-and-found disk can't get back into the
pool because of resilvers restarting...
I recall that with sufficiently
2012-05-11 17:18, Bob Friesenhahn wrote:
On Fri, 11 May 2012, Jim Klimov wrote:
Hello all,
SHORT VERSION:
What conditions can cause the reset of the resilvering
process? My lost-and-found disk can't get back into the
pool because of resilvers restarting...
I recall that with sufficiently
= 58650
[user daemon on thumper]
2012-05-12.02:45:56 [internal snapshot txg:91071280] dataset = 58652
[user jim on thumper]
2012-05-12.02:46:15 [internal snapshot txg:91071283] dataset = 58654
[user daemon on thumper]
2012-05-12.02:53:01 [internal pool scrub done txg:91071298] complete=0
[user
2012-05-12 4:26, Jim Klimov wrote:
Wonder if things would get better or worse if I kick one of the
drives (i.e. hotspare c5t6d0) out of the equation:
raidz1 ONLINE 0 0 0
c0t1d0 ONLINE 0 0 0
spare ONLINE 0 0 0
c1t2d0 ONLINE 0 0 0 6.72G resilvered
c5t6d0 ONLINE 0 0 0
c4t3d0 ONLINE 0 0 0
c6t5d0
2012-05-11 14:22, Jim Klimov wrote:
What conditions can cause the reset of the resilvering
process? My lost-and-found disk can't get back into the
pool because of resilvers restarting...
FOLLOW-UP AND NEW QUESTIONS
Here is a new piece of evidence - I've finally got something
out of fmdump
2012-05-12 15:52, Jim Klimov wrote:
2012-05-11 14:22, Jim Klimov wrote:
What conditions can cause the reset of the resilvering
process? My lost-and-found disk can't get back into the
pool because of resilvers restarting...
Guess I must assume that the disk is dying indeed, losing
connection
Thanks for staying tuned! ;)
2012-05-12 18:34, Richard Elling wrote:
On May 12, 2012, at 4:52 AM, Jim Klimov wrote:
2012-05-11 14:22, Jim Klimov wrote:
What conditions can cause the reset of the resilvering
process? My lost-and-found disk can't get back into the
pool because of resilvers
2012-05-12 7:01, Jim Klimov wrote:
Overall the applied question is whether the disk will
make it back into the live pool (ultimately with no
continuous resilvering), and how fast that can be done -
I don't want to risk the big pool with nonredundant
arrays for too long.
Here lies another
detection on POST (I'll test tonight) or
these big disks won't work in X4500, period?
[1]
http://code.google.com/p/solaris-parted/downloads/detail?name=solaris-parted-0.2.tar.gz&can=2&q=
Gotta run now, will ask more in the evening :)
Thanks for now,
//Jim
, check! ;}
2012-05-15 13:41, Jim Klimov wrote:
Hello all, I'd like some practical advice on migration of a
Sun Fire X4500 (Thumper) from aging data disks to a set of
newer disks. Some questions below are my own, others are
passed from the customer and I may consider not all of them
sane - but must ask
reasoning should apply to
other similar methods though, like iSCSI from remote
storage, or lofi-devices, or SVM as I thought of (ab)using
in this migration.
Thanks,
//Jim
2012-05-16 13:30, Joerg Schilling wrote:
Jim Klimov <jimkli...@cos.ru> wrote:
We know that large redundancy is highly recommended for
big HDDs, so in-place autoexpansion of the raidz1 pool
onto 3Tb disks is out of the question.
Before I started to use my thumper, I reconfigured it to use
for no
benefit to the buyer.
So this method was ruled out for this situation.
Thanks,
//Jim
be more performant and have more
RAM, I expect that this Thumper would be the backup box
for a new server, ultimately.
Thanks,
//Jim
stuff into the new test pools to see if any
conflicts arise in snv_117's support of the disk size.
Thanks,
//Jim
with no
deletions so far is oh-so-good! ;)
2012-05-17 1:21, Jim Klimov wrote:
2012-05-15 19:17, casper@oracle.com wrote:
Your old release of Solaris (nearly three years old) doesn't support
disks over 2TB, I would think.
(A 3TB is 3E12, the 2TB limit is 2^41 and the difference is around 800Gb
or later (oi_151a3?)
Perhaps, some known pool corruption issues or poor data
layouts in older ZFS software releases?..
Thanks,
//Jim
2012-05-18 1:39, Jim Klimov wrote:
A small follow-up on my tests, just in case readers are
interested in some numbers: the UltraStar 3Tb disk got
filled up by a semi-random selection of data from our old
pool in 24 hours sharp
One more number: the smaller pool completed its scrub in
57
recovery
windows when resilvering disks.
Q4: I wonder if similar (equivalent) solutions are already
in place and did not help much? ;)
Thanks,
//Jim
on that below :)
2012-05-18 15:30, Daniel Carosone wrote:
On Fri, May 18, 2012 at 03:05:09AM +0400, Jim Klimov wrote:
While waiting for that resilver to complete last week,
I caught myself wondering how the resilvers (are supposed
to) work in ZFS?
The devil finds work for idle hands
2012-05-18 19:08, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
I'm reading the ZFS on-disk spec, and I get the idea that there's an
uberblock pointing to a self-balancing tree (some say b-tree, some say
,
mount, umount, reenable zoned).
Hope this helps,
//Jim Klimov
zfs developers to make a POC? ;)
Thanks,
//Jim Klimov
2012-05-22 7:30, Daniel Carosone wrote:
On Mon, May 21, 2012 at 09:18:03PM -0500, Bob Friesenhahn wrote:
On Mon, 21 May 2012, Jim Klimov wrote:
This is so far a relatively raw idea and I've probably missed
something. Do you think it is worth pursuing and asking some
zfs developers to make
counts anyway (if no new problems are
found)
Thanks,
//Jim Klimov
this is functionally
identical. (At least, would be - if it were part of a supported
procedure as I suggest).
Thanks,
//Jim Klimov
PS: I pondered for a while whether I should make up an argument that
on a disk with dying mechanics, lots of random I/O (resilver) instead
of sequential I/O (dd) would cause
the incomplete resilver made me a practical experiment
of the idea.
The failure data does not support your hypothesis.
Ok, then my made-up and dismissed argument does not stand ;)
Thanks for the discussion,
//Jim Klimov
for preventive regular
scrubs)...
//Jim
that a pool with an error is exposed to possible
fatal errors (due to double-failures with single-protection).
//Jim
2012-05-25 21:45, Sašo Kiselkov wrote:
On 05/25/2012 07:35 PM, Jim Klimov wrote:
Sorry I can't comment on MPxIO, except that I thought zfs could by
itself discern two paths to the same drive, if only to protect
against double-importing the disk into pool.
Unfortunately, it isn't the same
2012-05-26 1:07, Richard Elling wrote:
On May 25, 2012, at 1:53 PM, zfs user wrote:
The man page seems to not mention the critical part of the FMA msg
that OP is worried about.
OP said that his motivation for clearing the errors and fearing the
degraded state was because he feared this:
, expiring ARC data pages and actually claiming the
RAM for the application... Right? ;)
//Jim
, when you have
much RAM dedicated to caching. Hmmm... did you use dedup in those
tests? That is another source of performance degradation on smaller
machines (under tens of GBs of RAM).
HTH,
//Jim
...
Now, waiting for experts to chime in on whatever I missed ;)
HTH,
//Jim Klimov
more valid (making a
copy of old data upon a new write), and if any vendors actually
did that procedure outlined above?
Thanks,
//Jim Klimov
I can't help but be curious about something, which perhaps you verified but
did not post.
What the data here shows is:
- CPU 31 is buried in the kernel (100% sys).
- CPU 31 is handling a moderate-to-high rate of xcalls.
What the data does not prove empirically is that the 100% sys time of
CPU
better ideas, perhaps someone had same experiences?
Thanks,
//Jim Klimov
architectural choices, components and their specs.
HTH,
//Jim
, presence of pool activity would likely delay
the scrub completion time, perhaps even more noticeably.
Thanks,
//Jim Klimov
such apparent bottlenecks. But people who
construct their own storage should know of (and try to avoid)
such possible problem-makers ;)
Thanks, Roch,
//Jim Klimov
to affect
these CPUs either...
I wonder if creating a CPU set not assigned to any active user,
and setting that CPU set to process (networking) interrupts,
would work or help (see psrset, psradm)?
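A hedged illustration (CPU ids and PID are hypothetical): keep the
application inside a processor set and mark those same CPUs no-intr,
so device interrupts get steered to the CPUs left outside the set:
# psrset -c 2 3            (create a pset from CPUs 2-3; prints the new pset id, say 1)
# psrset -b 1 1234         (bind PID 1234, the busy app, to pset 1)
# psradm -i 2 3            (CPUs 2-3 stop taking device interrupts)
# psradm -n 2 3            (later: back to normal interrupt handling)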
My 2c,
//Jim Klimov
So try unbinding the mac threads; it may help you here.
How do I do that? All I can find on interrupt fencing and the like is to
simply set certain processors to no-intr, which moves all of the
interrupts away but doesn't prevent the xcall storm from affecting
these CPUs either…
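A hedged sketch of the other direction, assuming a Crossbow-era dladm
that supports the 'cpus' link property (link name and CPU ids are
hypothetical): pin the NIC's kernel threads and interrupts to a few
CPUs so they stop landing on the application CPUs:
# dladm set-linkprop -p cpus=0,1,2,3 net0
# dladm show-linkprop -p cpus net0        (verify)
# dladm reset-linkprop -p cpus net0       (undo)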
In
(tens
of GBs for moderate-sized pools of tens of TB).
Your box seems to have a 12 TB pool with just a little bit
used, yet the shortage of RAM is already apparent...
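Rough, hedged arithmetic using the commonly cited figure of ~320 bytes
of core per DDT entry: a 12 TB pool of 128 KiB blocks is about
12e12 / 131072 ≈ 92 million blocks, so roughly 92e6 * 320 B ≈ 29 GB
of DDT if dedup covered all of it - hence the "tens of GBs".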
Hope this helps (understanding at least),
//Jim Klimov
new rpool
(and data pool if you've made one), installgrub onto the second
disk - and you're done.
HTH,
//Jim Klimov
2012-06-15 17:18, Jim Klimov wrote:
7) If you're on live media, try to rename the new rpool2 to
become rpool, i.e.:
# zpool export rpool2
# zpool export rpool
# zpool import -N rpool rpool2
# zpool export rpool
Ooops, bad typo in third line; should be:
# zpool export
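As a hedged reminder of the general syntax (not necessarily the exact
fix meant above): zpool import takes the new pool name as its second
argument, so renaming rpool2 to rpool without mounting would be:
# zpool export rpool2
# zpool import -N rpool2 rpool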
)? Or if the
drive lies, saying its sectors are 512b while they physically
are 4KB - it is undetectable except by reading vendor specs?
Thanks,
//Jim
haven't browsed the zfs on-disk spec, it may be also
helpful (though outdated in regard to current features):
*
http://hub.opensolaris.org/bin/download/Community+Group+zfs/docs/ondiskformat0822.pdf
HTH,
//Jim Klimov
- according to datasheets on site.
HTH,
//Jim Klimov
(uberblocks with newer
TXG numbers are, AFAIK, explicitly invalidated (zeroed out)).
I don't know how/if rollbacks work with read-only imports, or
whether they allow inspecting a pool at TXG number N without
forfeiting its newer changes.
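A hedged sketch of what I mean (pool name hypothetical; whether this
works at all varies by build):
# zpool import -o readonly=on -N tank
Some later illumos builds are also said to accept an undocumented
-T <txg> argument to zpool import for viewing an older transaction
group, but I have not verified that here.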
HTH,
//Jim Klimov
it
manually (if you only actively use this disk for one or more ZFS
pools - which play with caching nicely).
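For reference, a hedged example of toggling the write cache by hand on
a SAS/SCSI disk via format's expert mode (device name hypothetical;
menu entries can differ by drive type):
# format -e c0t1d0
format> cache
cache> write_cache
write_cache> enable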
HTH,
//Jim Klimov
2012-06-21 1:58, Richard Elling wrote:
On Jun 20, 2012, at 4:08 AM, Jim Klimov wrote:
Also by default if you don't give the whole drive to ZFS, its cache
may be disabled upon pool import and you may have to reenable it
The behaviour is to attempt to enable the disk's write cache if ZFS has
?
Regarding the zfs-auto-snapshot, it is possible to install the old
scripted package from OpenSolaris onto Solaris 10 at least; I did
not have much experience with newer releases yet (timesliderd) so
can't help better.
HTH,
//Jim Klimov
some other services and/or files (/etc/iscsi, something else?)
Thanks,
//Jim Klimov
2012-06-27 1:00, Bill Pijewski wrote:
On Tue, Jun 26, 2012 at 1:47 PM, Jim Klimov jimkli...@cos.ru wrote:
1) Is COMSTAR still not-integrated with shareiscsi ZFS attributes?
Or can the pool use the attribute, and the correct (new COMSTAR)
iSCSI target daemon will fire up?
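For context, a hedged sketch of the COMSTAR way of exporting a zvol,
which replaces the old shareiscsi/iscsitgt path (names and sizes are
hypothetical):
# zfs create -V 10g tank/iscsi/lun0
# svcadm enable -r svc:/network/iscsi/target:default
# itadm create-target
# stmfadm create-lu /dev/zvol/rdsk/tank/iscsi/lun0
# stmfadm add-view 600144F0...        (the LU GUID printed by create-lu)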
I can't speak
are
overqualified for their jobs and have lots of spare cycles -
so (de)compression has little impact on real work anyway.
Also decompression tends to be faster than compression,
because there is little to no analysis to do - only matching
compressed tags to a dictionary of original data snippets.
HTH,
//Jim
the
expectations from block-pointers, ZFS will know there are
errors or even losses.
For example, zpool scrub does just that - so you should
run that on your pool, if it is now importable to you.
Good luck, HTH,
//Jim Klimov
for the
second half's rewrite (if that comes soon enough), and may be
spooled to disk as a couple of 64K blocks or one 128K block
(if both changes come soon after each other - within one TXG).
HTH,
//Jim Klimov
2012-07-20 5:11, Bob Friesenhahn wrote:
On Fri, 20 Jul 2012, Jim Klimov wrote:
Zfs data block sizes are fixed size! Only tail blocks are shorter.
This is the part I am not sure is either implied by the docs
nor confirmed by my practice. But maybe I've missed something...
This is something
2012-07-22 1:24, Bob Friesenhahn wrote:
On Sat, 21 Jul 2012, Jim Klimov wrote:
During this quick test I did not manage to craft a test which
would inflate a file in the middle without touching its other
blocks (other than using a text editor which saves the whole
file - so that is irrelevant
it properly or not) are not
all inherently evil - this emulation by itself may be of some
concern regarding performance, but not one of reliability.
Then again, firmware errors are possible in any part of the
stack, of both older and newer models ;)
HTH,
//Jim
,
//Jim Klimov
.
Is this understanding correct? Does it apply to any generic writes,
or only to sync-heavy scenarios like databases or NFS servers?
Thanks,
//Jim Klimov
2012-07-29 19:50, Sašo Kiselkov wrote:
On 07/29/2012 04:07 PM, Jim Klimov wrote:
For several times now I've seen statements on this list implying
that a dedicated ZIL/SLOG device catching sync writes for the log,
also allows for more streamlined writes to the pool during normal
healthy TXG
2012-07-30 0:40, opensolarisisdeadlongliveopensolaris wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
For several times now I've seen statements on this list implying
that a dedicated ZIL/SLOG device catching sync writes
found, for some apparent reason ;)
Also, I am not sure whether bumping the copies attribute to,
say, 3 increases only the redundancy of userdata, or of
regular metadata as well.
//Jim
2012-08-01 16:22, Sašo Kiselkov wrote:
On 08/01/2012 12:04 PM, Jim Klimov wrote:
Probably DDT is also stored with 2 or 3 copies of each block,
since it is metadata. It was not in the last ZFS on-disk spec
from 2006 that I found, for some apparent reason ;)
The idea of the pun
(and rewrite both its copies now).
//Jim
2012-08-01 17:55, Sašo Kiselkov wrote:
On 08/01/2012 03:35 PM, opensolarisisdeadlongliveopensolaris wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
Availability of the DDT is IMHO crucial to a deduped pool, so
I won't
ultimately remove an unreferenced entry, then you benefit on
writes as well - you don't take as long to find DDT entries
(or determine lack thereof) for the blocks you add or remove.
Or did I get your answer wrong? ;)
//Jim
, as well.
//Jim
2012-08-01 23:34, opensolarisisdeadlongliveopensolaris wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
Well, there is at least a couple of failure scenarios where
copies1 are good:
1) A single-disk pool, as in a laptop
ob varies similarly, for the fun of it?
//Jim
updates, and the new
software image is written to disk without compression).
I wonder if it is possible to augment zfs clone with an option
to replicate origin's changeable attributes (all and/or a list of
ones we want), and use this feature in beadm?
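In the meantime, a hedged workaround sketch with existing commands
(dataset names are hypothetical; property values containing whitespace
would need more careful quoting): copy the origin's locally-set
properties onto the clone by hand:
# zfs get -H -s local -o property,value all rpool/ROOT/oldbe | \
    while read prop val; do zfs set "$prop=$val" rpool/ROOT/newbe; done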
Thanks,
//Jim Klimov