you can find the ZFS on-disk spec at:
http://opensolaris.org/os/community/zfs/docs/ondiskformat0822.pdf
I don't know of any way to produce snapshots at periodic intervals
other than shell scripts (or a cron job), but the creation and deletion
of snapshots at command level is fairly straightforward.
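As a rough sketch (the pool/file-system name, schedule and naming scheme here are
just examples), a single crontab entry can rotate an hourly snapshot by destroying
the previous day's snapshot of the same name and then taking a new one:

# run at the top of every hour; the destroy fails harmlessly the first day
0 * * * * /usr/sbin/zfs destroy tank/home@hourly-`date +\%H`; /usr/sbin/zfs snapshot tank/home@hourly-`date +\%H`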
Hi Bob ... as richard has mentioned, allocation to vdevs
is done in fixed-size chunks (richard specifies 1MB, but I
remember a 512KB number from the original spec; either way this
is not very important), and the allocation algorithm is
basically doing load balancing.
for your non-raid pool, this chunk
On my own system, when a new file is
written, the write block size does not make
a significant difference to the write speed.
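For example, one crude way to check this (file names and sizes are just examples)
is to time writes of a fresh file with small and large request sizes:

# each command writes a new 1GB file; compare the elapsed times
/usr/bin/time dd if=/dev/zero of=/pool/fs/test.small bs=8k count=131072
/usr/bin/time dd if=/dev/zero of=/pool/fs/test.large bs=1024k count=1024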
Yes, I've observed the same result ... when a new file is being written
sequentially, the file data and newly constructed meta-data can be
built in cache and written
I'm using an FC flash drive as a cache device to one of my pools:
zpool add pool-name cache device-name
and I'm running random IO tests to assess performance on a
snv-78 x86 system.
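For anyone reproducing this setup, the cache device shows up in its own section of
zpool status, and its activity during the random-read tests can be watched with
zpool iostat (pool name is a placeholder):

# per-device I/O statistics, refreshed every 5 seconds
zpool iostat -v pool-name 5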
I have a set of threads each doing random reads to about 25% of
its own, previously written, large
Thanks Marion and richard,
but I've run these tests with much larger data sets
and have never had this kind of problem when no
cache device was involved.
In fact, if I remove the SSD cache device from my
pool and run the tests, they seem to run with no issues
(except for some reduced performance
Thanks to Kyle, richard and Eric
In dealing with this problem, I realize now that I could have
saved myself a lot of grief if I had simply used the replace
command and substituted some other drive for my flash
drive before I removed it.
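Something along these lines is what I mean, with hypothetical device names:

# resilver onto a substitute disk before physically pulling the flash device
zpool replace pool-name c4t0d0 c5t0d0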
I think that this point is critical for anyone who finds
I have a question that is related to this topic: Why
is there only a (tunable) 5-second threshold and not
also an additional threshold for the buffer size
(e.g. 50MB)?
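For reference, if I remember right the time-based threshold corresponds to the
zfs_txg_timeout tunable in builds of this vintage (it may be called txg_time in
older ones), and it can be changed on a live system with mdb; this only adjusts
the existing time threshold, not the size-based one I'm asking about:

# set the txg sync interval to 10 seconds (requires root); value is decimal
echo zfs_txg_timeout/W0t10 | mdb -kw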
Sometimes I see my system writing huge amounts of
data to a ZFS file system, but the disks staying idle for 5
seconds, although the
But it seems that when we're talking about full block
writes (such as
sequential file writes) ZFS could do a bit better.
And as long as there is bandwidth left to the disk
and the controllers, it
is difficult to argue that the work is redundant. If
it's free in that
sense, it doesn't
This is a re-post of this issue ... I didn't get any replies to the previous
post of 12/27 ... I'm hoping someone is back from holiday
who may have some insight into this problem ... Bill
when I remove a separate zil disk from a pool, the pool continues to function,
logging synchronous writes to
file system journals may support a variety of availability models, ranging from
simple support for fast recovery (return to consistency) with possible data
loss, to those that attempt to support synchronous write semantics with no data
loss on failure, along with fast recovery
the simpler
when I remove a separate zil disk from a pool, the pool continues to function,
logging synchronous writes to the disks in the pool. Status shows that the log
disk has been removed, and everything seems to work fine until I export the
pool.
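For clarity, the sequence I'm describing is roughly this (pool and device names
are placeholders):

zpool add pool-name log device-name    # attach the separate zil device
# ... the log device is then removed; the pool keeps working ...
zpool status pool-name                 # shows the log device as removed
zpool export pool-name                 # this is where the trouble begins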
After the pool has been exported (long after the log
this system for extensive testing with ZFS and have had no
problems installing snv-68, 69 or 70, but I'm having this problem with snv-76
any information regarding this problem or a potential workaround would be
appreciated
Thx ... bill moloney
I have nv-63 installed on a Dell Precision 670 (dual Intel P4s) using zfs with
no problems.
When I attempt to install nv-69 from CD #1, just after the Copyright
notice and "Use is subject to license terms" print to the screen (when device
discovery usually begins), my system panics and
using hyperterm, I captured the panic message as:
SunOS Release 5.11 Version snv_69 32-bit
Copyright 1983-2007 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
panic[cpu0]/thread=fec1ede0: Can't handle mwait size 0
fec37e70 unix:mach_alloc_mwait+72 (fec2006c)
Thanks all for the details on this bug, looks like nv-70 should work for me
when the drop is available
I've been using an older P3 based server to test the new separate ZIL device
feature that became available in nv-68, using an FC flash drive as a log device
outside the zpool itself.
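For reference, a log device is attached the same way as a cache device, just with
the log keyword (names are placeholders):

zpool add pool-name log device-name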
I wanted
I've spent some time searching, and I apologize if I've missed this somewhere,
but in testing ZVOL write performance I cannot see any noticeable difference
between opening a ZVOL with or without O_DSYNC.
Does the O_DSYNC flag have any actual influence on ZVOL writes?
For ZVOLS that I have
this is not a problem we're trying to solve, but part of a characterization
study of the zfs implementation ... we're currently using the default 8KB
blocksize for our zvol deployment, and we're performing tests using write block
sizes as small as 4KB and as large as 1MB, as previously described.
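For anyone following along, a zvol with the default 8KB block size can be created
like this (size and names are placeholders; volblocksize defaults to 8K and is
shown explicitly only for clarity):

zfs create -V 10g -o volblocksize=8k pool-name/test-vol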
writes to ZFS objects have significant data and meta-data implications, based
on the zfs copy-on-write implementation ... as data is written into a file
object, for example, this update must eventually be written to a new location
on physical disk, and all of the meta-data (from the uberblock
Thanks for the input Darren, but I'm still confused about DNODE atomicity ...
it's difficult to imagine that a change that is made anyplace in the zpool
would require copy operations all the way back up to the uberblock (e.g. if
some single file in one of many file systems in a zpool was
ZFS documentation lists snapshot limits on any single file system in a pool at
2**48 snaps, and that seems to logically imply that a snap on a file system
does not require an update to the pool’s currently active uberblock. That is
to say, that if we take a snapshot of a file system in a pool,
The ZFS On-Disk specification and other ZFS documentation describe the labeling
scheme used for the vdevs that comprise a ZFS pool. A label entry contains,
among other things, an array of uberblocks, one of which will point to the
active object set of the pool it is a part of at a given
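For anyone who wants to look at this on a live system, the labels and the
currently active uberblock can be dumped with zdb (device path and pool name are
just examples):

zdb -l /dev/rdsk/c4t0d0s0    # print the vdev's on-disk labels
zdb -u pool-name             # display the pool's active uberblock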