bash-3.00# dd if=/dev/urandom of=/dev/dsk/c1t10d0 bs=1024 count=20480
A couple of things:
(1) When you write to /dev/dsk, rather than /dev/rdsk, the writes are
cached in memory, so the on-disk state may have been left unaltered.
That's why I also did a zpool export poolname followed by a
- Description of why I don't need directio, quickio, or ODM.
The two main benefits that came out of using directio were reducing
memory consumption by avoiding the page cache AND bypassing the UFS
single-writer behavior. ZFS does not have the single-writer lock.
As for memory, the UFS code
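For contrast, this is roughly how an application asks UFS for unbuffered
I/O via directio(3C). The file path is hypothetical, and the behavior on
ZFS (where the call is expected to fail, since there is no directio mode)
is an assumption of this sketch:

	#include <sys/types.h>
	#include <sys/fcntl.h>
	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	int
	main(void)
	{
		/* hypothetical datafile path, for illustration only */
		int fd = open("/ufsdb/datafile", O_RDWR);

		if (fd == -1) {
			perror("open");
			return (1);
		}

		/*
		 * Ask the filesystem to bypass the page cache for this
		 * file. On UFS this avoids double buffering; on ZFS the
		 * call should simply fail, as ZFS has no directio mode.
		 */
		if (directio(fd, DIRECTIO_ON) == -1)
			perror("directio");

		(void) close(fd);
		return (0);
	}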
Where does the 12.5% compression rule in zio_compress_data() come from?
Given that this is in the generic function for all compression
algorithms, rather than in the implementation of lzjb, I wonder where
the number comes from.
Just curious.
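For reference, the check in question has roughly this shape. The snippet
below is a standalone paraphrase from memory, not the actual source, and
the names in it are illustrative:

	#include <stdio.h>

	/*
	 * The 12.5% rule, paraphrased: compression only "counts" if the
	 * result is at least s_len >> 3 bytes (one eighth) smaller than
	 * the input; otherwise the block is written uncompressed.
	 */
	static size_t
	compress_budget(size_t s_len)
	{
		return (s_len - (s_len >> 3));	/* 87.5% of source size */
	}

	int
	main(void)
	{
		size_t s_len = 131072;	/* a 128K block */
		size_t c_len = 120000;	/* hypothetical compressed size */

		if (c_len > compress_budget(s_len))
			printf("saved less than 12.5%%: store uncompressed\n");
		else
			printf("compression accepted: %zu -> %zu bytes\n",
			    s_len, c_len);
		return (0);
	}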
--
Darren J Moffat
I think there are two potential issues here.
The ZFS cache, or ARC, manages memory for all pools on a system, but
the data is not really organized per pool. So on a pool export we don't
free up the buffers associated with that pool; the memory is actually
returned to the system either when
well, here's my first pass result:
[EMAIL PROTECTED] loges1]# time tar xf /root/linux-2.2.26.tar
real    114m6.662s
user    0m0.049s
sys     0m1.354s
On 5/11/06, Roch Bourbonnais - Performance Engineering [EMAIL PROTECTED] wrote:
Joe Little writes:
How did you get the average time for
This would be comparable to what live upgrade does with its sync option.
With lu, certain files get synced to the newly activated BE just prior
to booting it up. (see /etc/lu/synclist)
Let's take a filesystem which contains both static application data as
well as constantly changing files
George Wilson wrote:
This would be comparable to what live upgrade does with its sync option.
With lu, certain files get synced to the newly activated BE just prior
to booting it up. (see /etc/lu/synclist)
Even in that file there are three different policies:
OVERWRITE, APPEND, and PREPEND.
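From memory, the entries in that file pair a path with a policy,
something like the following (paths and policies here are illustrative,
not a verbatim copy of the shipped synclist):

	/var/mail        OVERWRITE
	/etc/passwd      OVERWRITE
	/etc/motd        APPEND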
6370738 zfs diffs filesystems
On Thu, May 11, 2006 at 03:38:59PM +0100, Darren J Moffat wrote:
What would the output of zfs diffs be?
My original conception was:
- dnode # + changed blocks
- plus some naming hints so that one could quickly find changed dnodes
in clones
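As a purely hypothetical mock-up of what that might look like (the
command, dnode numbers, and paths are all invented for illustration):

	# zfs diffs tank/home@monday tank/home@tuesday
	dnode 1234  blocks 0-3,17   hint: /export/home/alice/mbox
	dnode 5678  blocks 2        hint: /export/home/bob/.profile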
I talked about this with Bill Moore and he came up
On Thu, May 11, 2006 at 11:15:12AM -0400, Bill Sommerfeld wrote:
This situation is analogous to the merge with common ancestor
operations performed on source code by most SCM systems; with a named
snapshot as the clone base, the ancestor is preserved and can easily be
retrieved.
Yes, and in
A couple of points/additions with regard to Oracle in particular:
When talking about large database installations, copy-on-write may
or may not apply. The files are never completely rewritten, only
changed internally via mmap(). When you lay down your database, you
will generally
On Thu, 2006-05-11 at 10:27 -0700, Richard Elling wrote:
On Thu, 2006-05-11 at 10:31 -0600, Gregory Shaw wrote:
A couple of points/additions with regard to Oracle in particular:
When talking about large database installations, copy-on-write may
or may not apply. The files are
On 5/11/06, Peter Rival [EMAIL PROTECTED] wrote:
Richard Elling wrote:
Oracle will zero-fill the tablespace with 128 KByte I/Os -- it is not
sparse. I've got a scar. Has this changed in the past few years?
Multiple parallel tablespace creates are usually a big pain point for
filesystem /
Regarding directio and quickio, is there a way with ZFS to skip the
system buffer cache? I've seen big benefits from using directio when
the data files have been segregated from the log files.
Having the system compete with the DB for read-ahead results in
double work.
On May 10, 2006,
Plan A, to mirror on iSCSI devices:
keep one server with a set of ZFS file systems
with two (sub)mirrors each; one of the mirrors uses
devices physically on a remote site, accessed as
iSCSI LUNs.
How does ZFS handle remote replication?
If the
Are you saying that copy-on-write doesn't apply for mmap changes, but
only file re-writes? I don't think that gels with anything else I
know about ZFS.
No, you're correct -- everything is copy-on-write.
Jeff