Re: [zfs-discuss] overhead of snapshot operations

2008-03-21 Thread Bill Moloney
you can find the ZFS on-disk spec at: http://opensolaris.org/os/community/zfs/docs/ondiskformat0822.pdf
I don't know of any way to produce snapshots at periodic intervals other than shell scripts (or a cron job), but the creation and deletion of snapshots at the command level is fairly
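For example, a minimal cron-based sketch (the tank/home dataset name is hypothetical):

    # crontab entry: snapshot tank/home every hour, on the hour;
    # note that % must be escaped as \% inside a crontab entry
    0 * * * * /usr/sbin/zfs snapshot tank/home@auto-`date +\%Y\%m\%d-\%H\%M`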

Re: [zfs-discuss] ZFS I/O algorithms

2008-03-19 Thread Bill Moloney
Hi Bob ... as Richard has mentioned, allocation to vdevs is done in fixed-size chunks (Richard specs 1MB, though I remember a 512KB number from the original spec; either way, it's not very important), and the allocation algorithm is basically doing load balancing. For your non-raid pool, this chunk
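One way to watch that load balancing in practice (pool name hypothetical) is to observe per-vdev activity during a large sequential write:

    # show per-vdev I/O statistics every 5 seconds; with striped top-level
    # vdevs, writes should spread roughly evenly across them
    zpool iostat -v tank 5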

Re: [zfs-discuss] ZFS I/O algorithms

2008-03-19 Thread Bill Moloney
> On my own system, when a new file is written, the write block size does not make a significant difference to the write speed
Yes, I've observed the same result ... when a new file is being written sequentially, the file data and newly constructed meta-data can be built in cache and written
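A quick way to reproduce this kind of comparison (file system path hypothetical) is to write the same total amount at two very different request sizes:

    # 1GB sequential writes at 8K and 1M request sizes; both should land
    # in the ARC and be flushed to disk in large transaction groups
    dd if=/dev/zero of=/tank/fs/testfile1 bs=8k count=131072
    dd if=/dev/zero of=/tank/fs/testfile2 bs=1024k count=1024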

[zfs-discuss] SSD cache device hangs ZFS

2008-01-17 Thread Bill Moloney
I'm using an FC flash drive as a cache device to one of my pools: zpool add pool-name cache device-name. I'm running random IO tests to assess performance on an snv-78 x86 system. I have a set of threads, each doing random reads to about 25% of its own, previously written, large
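For reference, a sketch of the setup I'm describing (pool and device names hypothetical):

    # add a flash drive as a cache device to an existing pool, then
    # confirm it appears under the 'cache' heading in the status output
    zpool add tank cache c2t1d0
    zpool status tank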

Re: [zfs-discuss] SSD cache device hangs ZFS

2008-01-17 Thread Bill Moloney
Thanks Marion and Richard, but I've run these tests with much larger data sets and have never had this kind of problem when no cache device was involved. In fact, if I remove the SSD cache device from my pool and run the tests, they seem to run with no issues (except for some reduced performance

Re: [zfs-discuss] removing a separate zil device

2008-01-09 Thread Bill Moloney
Thanks to Kyle, Richard and Eric. In dealing with this problem, I realize now that I could have saved myself a lot of grief if I had simply used the replace command and substituted some other drive for my flash drive before I removed it. I think this point is critical for anyone who finds
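In other words, something like the following (pool and device names hypothetical) before pulling the flash drive:

    # substitute an ordinary disk for the flash log device while the
    # pool is healthy, rather than removing the log device outright
    zpool replace tank c3t0d0 c4t0d0
    zpool status tank    # wait until the replace completes before removal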

Re: [zfs-discuss] Intent logs vs Journaling

2008-01-08 Thread Bill Moloney
I have a question related to this topic: why is there only a (tunable) 5-second threshold and not also an additional threshold for the buffer size (e.g. 50MB)? Sometimes I see my system writing huge amounts of data to a ZFS file system, but the disks staying idle for 5 seconds, although the
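For what it's worth, on builds that expose the txg sync interval as a tunable it can be shortened; the tunable name has varied across builds, so treat this as an assumption to verify on your release:

    # /etc/system entry -- shortens the txg sync interval to 1 second
    # (verify the tunable exists first: echo zfs_txg_timeout/D | mdb -k)
    set zfs:zfs_txg_timeout = 1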

Re: [zfs-discuss] Intent logs vs Journaling

2008-01-08 Thread Bill Moloney
But it seems that when we're talking about full-block writes (such as sequential file writes) ZFS could do a bit better. And as long as there is bandwidth left to the disks and the controllers, it is difficult to argue that the work is redundant. If it's free in that sense, it doesn't

[zfs-discuss] removing a separate zil device

2008-01-07 Thread Bill Moloney
This is a re-post of this issue ... I didn't get any replies to the previous post of 12/27 ... I'm hoping someone back from holiday has some insight into this problem ... Bill
When I remove a separate ZIL disk from a pool, the pool continues to function, logging synchronous writes to

Re: [zfs-discuss] Intent logs vs Journaling

2008-01-07 Thread Bill Moloney
File system journals may support a variety of availability models, ranging from simple support for fast recovery (return to consistency) with possible data loss, to those that attempt to support synchronous write semantics with no data loss on failure, along with fast recovery. The simpler

[zfs-discuss] separate zil removal

2007-12-27 Thread Bill Moloney
When I remove a separate ZIL disk from a pool, the pool continues to function, logging synchronous writes to the disks in the pool. Status shows that the log disk has been removed, and everything seems to work fine until I export the pool. After the pool has been exported (long after the log
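The sequence that gets me into this state looks roughly like this (device names hypothetical):

    # create a pool with a separate intent-log device
    zpool create tank c2t0d0 log c3t0d0
    # ... physically remove the log disk; the pool keeps working ...
    zpool status tank      # reports the log device as removed
    zpool export tank      # after this, the pool cannot be imported back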

[zfs-discuss] snv-76 panics on installation

2007-11-20 Thread Bill Moloney
this system for extensive testing with ZFS and have had no problems installing snv-68, 69 or 70, but I'm having this problem with snv-76. Any information regarding this problem or a potential workaround would be appreciated. Thx ... bill moloney

[zfs-discuss] nv-69 install panics dell precision 670

2007-08-14 Thread Bill Moloney
I have nv-63 installed on a Dell Precision 670 (dual Intel P4s) using ZFS with no problems. When I attempt to install nv-69 from CD #1, just after the copyright notice and "Use is subject to license terms" print to the screen (when device discovery usually begins), my system panics and

Re: [zfs-discuss] nv-69 install panics dell precision 670

2007-08-14 Thread Bill Moloney
Using HyperTerm, I captured the panic message as:
    SunOS Release 5.11 Version snv_69 32-bit
    Copyright 1983-2007 Sun Microsystems, Inc. All rights reserved.
    Use is subject to license terms.
    panic[cpu0]/thread=fec1ede0: Can't handle mwait size 0
    fec37e70 unix:mach_alloc_mwait+72 (fec2006c)

Re: [zfs-discuss] nv-69 install panics dell precision 670

2007-08-14 Thread Bill Moloney
Thanks all for the details on this bug; looks like nv-70 should work for me when the drop is available. I've been using an older P3-based server to test the new separate ZIL device feature that became available in nv-68, using an FC flash drive as a log device outside the zpool itself. I wanted

[zfs-discuss] ZVOLs and O_DSYNC, fsync() behavior

2007-06-25 Thread Bill Moloney
I've spent some time searching, and I apologize if I've missed this somewhere, but in testing ZVOL write performance I cannot see any noticeable difference between opening a ZVOL with or without O_DSYNC. Does the O_DSYNC flag have any actual influence on ZVOL writes? For ZVOLs that I have
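As a rough check from the shell (zvol path hypothetical; note oflag=dsync is a GNU coreutils dd option, not native Solaris dd):

    # write to the zvol with and without O_DSYNC and compare elapsed time;
    # a working O_DSYNC should make the second run noticeably slower
    time dd if=/dev/zero of=/dev/zvol/rdsk/tank/vol1 bs=8k count=10240
    time dd if=/dev/zero of=/dev/zvol/rdsk/tank/vol1 bs=8k count=10240 oflag=dsync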

[zfs-discuss] Re: Re[2]: Re: How does ZFS write data to disks?

2007-05-17 Thread Bill Moloney
This is not a problem we're trying to solve, but part of a characterization study of the ZFS implementation ... we're currently using the default 8KB blocksize for our zvol deployment, and we're performing tests using write block sizes as small as 4KB and as large as 1MB, as previously described
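For context, the zvol setup under test looks like this (pool/volume names and sizes hypothetical):

    # create a zvol with the default 8K volblocksize made explicit,
    # then drive it with write sizes from 4K up to 1M
    zfs create -b 8k -V 10g tank/testvol
    dd if=/dev/zero of=/dev/zvol/rdsk/tank/testvol bs=4k count=262144
    dd if=/dev/zero of=/dev/zvol/rdsk/tank/testvol bs=1024k count=1024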

[zfs-discuss] Re: How does ZFS write data to disks?

2007-05-16 Thread Bill Moloney
Writes to ZFS objects have significant data and meta-data implications, based on the ZFS copy-on-write implementation ... as data is written into a file object, for example, this update must eventually be written to a new location on physical disk, and all of the meta-data (from the uberblock

[zfs-discuss] Re: The ZFS MOS and how DNODES are stored

2007-02-07 Thread Bill Moloney
Thanks for the input Darren, but I'm still confused about DNODE atomicity ... it's difficult to imagine that a change made anywhere in the zpool would require copy operations all the way back up to the uberblock (e.g. if some single file in one of many file systems in a zpool was

[zfs-discuss] The ZFS MOS and how DNODES are stored

2007-02-06 Thread Bill Moloney
ZFS documentation lists snapshot limits on any single file system in a pool at 2**48 snaps, and that seems to logically imply that a snap on a file system does not require an update to the pool's currently active uberblock. That is to say, if we take a snapshot of a file system in a pool,
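One way to probe this (pool/fs names hypothetical) is to dump the active uberblock before and after taking a snapshot and compare the txg numbers:

    # dump the pool's uberblocks, take a snapshot, then dump them again;
    # comparing txg numbers shows whether and when the uberblock advances
    zdb -uuu tank
    zfs snapshot tank/fs@probe
    zdb -uuu tank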

[zfs-discuss] ZFS limits on zpool snapshots

2007-02-01 Thread Bill Moloney
The ZFS On-Disk specification and other ZFS documentation describe the labeling scheme used for the vdevs that comprise a ZFS pool. A label entry contains, among other things, an array of uberblocks, one of which will point to the active object set of the pool it is a part of at a given
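The label contents described in the spec can be inspected directly with zdb (device path hypothetical):

    # dump the vdev labels, including the uberblock array,
    # from one of the pool's member disks
    zdb -l /dev/dsk/c2t0d0s0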