Cool, I'll try the tool, and for good measure the data I
posted was sequential access (from a logical point of view).
As for the physical layout, I don't know; it's quite
possible that ZFS has laid out all blocks sequentially on
the physical side, so certainly this is not a good
Gregory Shaw writes:
I really like the below idea:
- the ability to defragment a file 'live'.
I can see instances where that could be very useful. For instance,
if you have multiple LUNs (or spindles, whatever) using ZFS, you
could re-optimize large files to spread
Rich, correct me if I'm wrong, but here's the scenario I was thinking
of:
- A large file is created.
- Over time, the file grows and shrinks.
The anticipated layout on disk due to this is that extents are
allocated as the file changes. The extents may or may not be on
multiple spindles.
The problem I see with sequential accesses that jump all over the place is
that they increase the utilization of the disks -
over the years disks have become even faster for sequential access,
whereas random access (as they have
to move the actuator) has not improved at the same pace - this is what
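A back-of-the-envelope model makes the point concrete. The numbers below (200 MB/s streaming rate, 8 ms per seek, 128 KiB records) are illustrative assumptions, not measurements from this thread:

```python
# Illustrative model: reading 1 GiB in 128 KiB records, laid out
# contiguously vs. scattered so that every record costs a seek.
# The 200 MB/s and 8 ms figures are assumptions for the sketch.

STREAM_MBPS = 200.0      # assumed streaming transfer rate
SEEK_S = 0.008           # assumed average seek + rotational delay
RECORD = 128 * 1024      # 128 KiB, a typical ZFS recordsize
TOTAL = 1024 ** 3        # 1 GiB

records = TOTAL / RECORD
transfer_s = TOTAL / (STREAM_MBPS * 1024 * 1024)

seq_s = transfer_s                       # one long streaming read
rand_s = transfer_s + records * SEEK_S   # a seek before every record

print(f"sequential: {seq_s:.1f} s, fragmented: {rand_s:.1f} s "
      f"({rand_s / seq_s:.0f}x slower)")
# → sequential: 5.1 s, fragmented: 70.7 s (14x slower)
```

Transfer time is identical in both cases; the entire penalty is actuator movement, which is why fragmentation of a logically sequential file hurts so much.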
Jeff Bonwick writes:
Are you saying that copy-on-write doesn't apply for mmap changes, but
only file re-writes? I don't think that gels with anything else I
know about ZFS.
No, you're correct -- everything is copy-on-write.
Maybe the confusion comes from:
mmap
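The mmap point can be seen at the API level: through a mapping the application updates a page "in place", yet a copy-on-write filesystem still persists that dirty page however it likes. A minimal sketch using Python's mmap module (the on-disk COW behavior is ZFS's, not something this code can observe):

```python
import mmap, os, tempfile

# Modify a file "in place" through a memory mapping. From the
# application's point of view this is an overwrite; a copy-on-write
# filesystem such as ZFS nevertheless persists the dirty page by
# writing a brand-new block and updating the block pointers.
fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"hello world")
    with mmap.mmap(fd, 0) as m:
        m[0:5] = b"HELLO"   # store into the mapped page
        m.flush()           # msync(2): hand the dirty page to the FS
    with open(path, "rb") as f:
        data = f.read()
    print(data)             # b'HELLO world'
finally:
    os.close(fd)
    os.unlink(path)
```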
Tao Chen writes:
On 5/11/06, Peter Rival [EMAIL PROTECTED] wrote:
Richard Elling wrote:
Oracle will zero-fill the tablespace with 128 kByte I/Os -- it is not
sparse. I've got a scar. Has this changed in the past few years?
Multiple parallel tablespace creates are usually a big
On 5/12/06, Roch Bourbonnais - Performance Engineering
[EMAIL PROTECTED] wrote:
From: Gregory Shaw [EMAIL PROTECTED]
Regarding directio and quickio, is there a way with ZFS to skip the
system buffer cache? I've seen big benefits for using directio when
the data files have been
Roch Bourbonnais - Performance Engineering wrote:
Tao Chen writes:
On 5/12/06, Roch Bourbonnais - Performance Engineering
[EMAIL PROTECTED] wrote:
From: Gregory Shaw [EMAIL PROTECTED]
Regarding directio and quickio, is there a way with ZFS to skip the
system buffer cache?
You could start with the ARC paper, Megiddo/Modha FAST'03
conference. ZFS uses a variation of that. It's an interesting
read.
-r
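For readers who want the flavor of the algorithm without the paper: below is a toy sketch of ARC in Python. T1/T2 hold cached pages seen once / more than once, the "ghost" lists B1/B2 remember only the keys of recent evictions, and p adapts the split between recency and frequency. This illustrates Megiddo/Modha's idea; ZFS's actual ARC differs in several ways (variable block sizes, locked buffers, and more):

```python
from collections import OrderedDict

class ARC:
    """Toy Adaptive Replacement Cache (after Megiddo/Modha, FAST '03)."""

    def __init__(self, c):
        self.c, self.p = c, 0                       # capacity, T1 target
        self.t1, self.t2 = OrderedDict(), OrderedDict()  # cached pages
        self.b1, self.b2 = OrderedDict(), OrderedDict()  # ghost keys only

    def _replace(self, key):
        # Evict from T1 if it exceeds its target (or ties it on a B2 hit).
        from_t1 = self.t1 and (len(self.t1) > self.p or
                               (key in self.b2 and len(self.t1) == self.p))
        if from_t1 or not self.t2:
            old, _ = self.t1.popitem(last=False)
            self.b1[old] = None
        else:
            old, _ = self.t2.popitem(last=False)
            self.b2[old] = None

    def get(self, key, load):
        if key in self.t1:                  # second hit: promote to T2
            self.t2[key] = self.t1.pop(key)
            return self.t2[key]
        if key in self.t2:                  # frequent hit: move to MRU end
            self.t2.move_to_end(key)
            return self.t2[key]
        if key in self.b1:                  # ghost hit: recency helps, grow p
            self.p = min(self.c, self.p + max(1, len(self.b2) // len(self.b1)))
            self._replace(key)
            del self.b1[key]
            self.t2[key] = load(key)
            return self.t2[key]
        if key in self.b2:                  # ghost hit: frequency helps, shrink p
            self.p = max(0, self.p - max(1, len(self.b1) // len(self.b2)))
            self._replace(key)
            del self.b2[key]
            self.t2[key] = load(key)
            return self.t2[key]
        # complete miss: make room, trimming the ghost lists as needed
        total = len(self.t1) + len(self.t2) + len(self.b1) + len(self.b2)
        if len(self.t1) + len(self.b1) == self.c:
            if len(self.t1) < self.c:
                self.b1.popitem(last=False)
                self._replace(key)
            else:
                self.t1.popitem(last=False)
        elif total >= self.c:
            if total >= 2 * self.c:
                self.b2.popitem(last=False)
            self._replace(key)
        self.t1[key] = load(key)
        return self.t1[key]
```

The ghost lists are what make ARC adaptive: a hit in B1 means a larger T1 would have helped, a hit in B2 means a larger T2 would have, and p drifts accordingly.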
Franz Haberhauer writes:
Gregory Shaw wrote On 05/11/06 21:15,:
Regarding directio and quickio, is there a way with ZFS to skip the
system buffer cache?
Franz Haberhauer writes:
'ZFS optimizes random writes versus potential sequential reads.'
This remark focused on the allocation policy during writes,
not the readahead that occurs during reads.
Data that are rewritten randomly but in place in a sequential,
contiguous file (like a
To: Mike Gerdts [EMAIL PROTECTED]
Cc: ZFS filesystem discussion list zfs-discuss@opensolaris.org,
[EMAIL PROTECTED]
Subject: Re: [zfs-discuss] ZFS and databases
Date: Thu, 11 May 2006 13:15:48 -0600
Regarding directio and quickio, is there a way with ZFS to skip the
system buffer cache
I really like the below idea:
- the ability to defragment a file 'live'.
I can see instances where that could be very useful. For instance,
if you have multiple LUNs (or spindles, whatever) using ZFS, you
could re-optimize large files to spread the chunks across as many
spindles
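ZFS has no such tool today (this thread is wishing for one), but the brute-force userland approximation is simply to rewrite the file, letting the allocator lay the new copy out fresh. A hypothetical sketch (the function name and block size are made up, and it is not safe against concurrent writers):

```python
import os, shutil, tempfile

def rewrite_file(path, blocksize=128 * 1024):
    """Naive userland 'defragment': copy the file block by block into
    a new file on the same filesystem, then rename it over the
    original.  The allocator sees one long sequential write and can
    lay the blocks out contiguously.  NOT safe if anything else has
    the file open for writing."""
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "wb") as dst, open(path, "rb") as src:
            shutil.copyfileobj(src, dst, blocksize)
            dst.flush()
            os.fsync(dst.fileno())    # data on disk before the rename
        os.replace(tmp, path)         # atomic rename into place
    except BaseException:
        if os.path.exists(tmp):
            os.unlink(tmp)
        raise
```

The same trick is why "copy the database files during a maintenance window" has long been the poor man's defragmenter on extent-based filesystems too.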
- Description of why I don't need directio, quickio, or ODM.
The two main benefits that came out of using directio were
reducing memory consumption by avoiding the page cache AND
bypassing the UFS single-writer behavior.
ZFS does not have the single writer lock.
As for memory, the UFS code
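For comparison, on platforms that honor posix_fadvise(2) an application can at least hint that it will not reuse its pages, which captures the memory-consumption half of the directio argument (it does nothing about write serialization). A sketch, assuming Linux semantics; the advice is only a hint:

```python
import os, tempfile

# After writing data we will not re-read, advise the kernel to drop
# the cached pages (POSIX_FADV_DONTNEED).  This approximates one of
# directio's two benefits: not polluting the page cache.
fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"log record\n" * 1000)
    os.fsync(fd)                      # pages must be clean to be dropped
    if hasattr(os, "posix_fadvise"):  # not available on every platform
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
    os.lseek(fd, 0, os.SEEK_SET)
    data = os.read(fd, 11)            # reads still work, via fresh I/O
    print(data)                       # b'log record\n'
finally:
    os.close(fd)
    os.unlink(path)
```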
A couple of points/additions with regard to oracle in particular:
When talking about large database installations, copy-on-write may
or may not apply. The files are never completely rewritten, only
changed internally via mmap(). When you lay down your database, you
will generally
On Thu, 2006-05-11 at 10:27 -0700, Richard Elling wrote:
On Thu, 2006-05-11 at 10:31 -0600, Gregory Shaw wrote:
A couple of points/additions with regard to oracle in particular:
When talking about large database installations, copy-on-write may
or may not apply. The files are
On 5/11/06, Peter Rival [EMAIL PROTECTED] wrote:
Richard Elling wrote:
Oracle will zero-fill the tablespace with 128 kByte I/Os -- it is not
sparse. I've got a scar. Has this changed in the past few years?
Multiple parallel tablespace creates are usually a big pain point for
filesystem /
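The sparse/zero-filled difference is easy to demonstrate: a file extended with truncate occupies (almost) no blocks until it is written, while a zero-filled one allocates every block, which is exactly the write traffic that makes tablespace creation expensive. A small demonstration (file names and size are arbitrary):

```python
import os, tempfile

SIZE = 1024 * 1024   # 1 MiB; real tablespaces are far larger

with tempfile.TemporaryDirectory() as d:
    sparse = os.path.join(d, "sparse.dbf")
    filled = os.path.join(d, "filled.dbf")

    with open(sparse, "wb") as f:    # set the length only: a hole
        f.truncate(SIZE)
    with open(filled, "wb") as f:    # actually issue the zero writes
        f.write(b"\0" * SIZE)

    sparse_blocks = os.stat(sparse).st_blocks
    filled_blocks = os.stat(filled).st_blocks
    print("sparse:", sparse_blocks, "blocks; filled:", filled_blocks, "blocks")
    # Both files report st_size == SIZE, but only the zero-filled one
    # consumed physical blocks -- those writes are the creation cost.
```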
Regarding directio and quickio, is there a way with ZFS to skip the
system buffer cache? I've seen big benefits for using directio when
the data files have been segregated from the log files.
Having the system compete with the DB for read-ahead results in
double work.
On May 10, 2006,
Are you saying that copy-on-write doesn't apply for mmap changes, but
only file re-writes? I don't think that gels with anything else I
know about ZFS.
No, you're correct -- everything is copy-on-write.
Jeff
One question that has come up a number of times when I've been
speaking with people (read: evangelizing :) ) about ZFS is about
database storage. In conventional deployments, redo logs have been
separated from table space on a per-spindle basis.
I'm not a database expert but I believe the reasons