Jeff Bonwick writes:
Are you saying that copy-on-write doesn't apply to mmap changes, but
only to file re-writes? I don't think that gels with anything else I
know about ZFS.
No, you're correct -- everything is copy-on-write.
Maybe the confusion comes from:
mmap
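Since the thread keeps coming back to this point, here is a minimal
sketch of what "everything is copy-on-write" means for mmap. The path
and sizes are hypothetical, and the file is assumed to already exist
and be at least one page long: dirtying a mapped page and flushing it
makes ZFS write the page to newly allocated blocks, just as a write(2)
to the same offset would.

    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/tank/demo/file", O_RDWR);   /* hypothetical path */
        if (fd == -1)
            exit(1);

        char *p = mmap(NULL, 8192, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
        if (p == MAP_FAILED)
            exit(1);

        memcpy(p, "new bytes", 9);   /* dirty the page in memory */
        msync(p, 8192, MS_SYNC);     /* flush: ZFS allocates fresh blocks   */
                                     /* rather than overwriting the old one */
        munmap(p, 8192);
        close(fd);
        return 0;
    }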
Tao Chen writes:
On 5/11/06, Peter Rival [EMAIL PROTECTED] wrote:
Richard Elling wrote:
Oracle will zero-fill the tablespace with 128 kByte I/Os -- it is not
sparse. I've got a scar. Has this changed in the past few years?
Multiple parallel tablespace creates is usually a big
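For anyone who wants to verify the "not sparse" claim on their own
system, a minimal sketch (pass whatever tablespace file you created):
compare the bytes actually allocated (st_blocks is in 512-byte units)
against the logical size. A zero-filled tablespace should show roughly
equal numbers; a sparse file shows far fewer allocated bytes.

    #include <stdio.h>
    #include <sys/stat.h>

    int main(int argc, char **argv)
    {
        struct stat st;

        if (argc < 2 || stat(argv[1], &st) != 0)
            return 1;

        long long allocated = (long long)st.st_blocks * 512;
        printf("logical %lld bytes, allocated %lld bytes -> %s\n",
               (long long)st.st_size, allocated,
               allocated < (long long)st.st_size ? "sparse" : "not sparse");
        return 0;
    }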
On 5/12/06, Roch Bourbonnais - Performance Engineering
[EMAIL PROTECTED] wrote:
From: Gregory Shaw [EMAIL PROTECTED]
Regarding directio and quickio, is there a way with ZFS to skip the
system buffer cache? I've seen big benefits from using directio when
the data files have been
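For reference, the interface being asked about is the UFS-era
directio(3C) advisory call on Solaris; a minimal sketch follows, with a
hypothetical data file path. This is the existing UFS/NFS mechanism the
question is contrasting with ZFS, which at the time did not honor it.

    #include <sys/types.h>
    #include <sys/stat.h>
    #include <sys/fcntl.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/u01/oradata/tbs01.dbf", O_RDWR);   /* hypothetical */
        if (fd == -1) {
            perror("open");
            return 1;
        }

        /* Advise the kernel to bypass the page cache for this file.
         * Honored by UFS; ZFS did not implement it at the time. */
        if (directio(fd, DIRECTIO_ON) != 0)
            perror("directio");

        /* ... unbuffered reads and writes here ... */

        close(fd);
        return 0;
    }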
Roch Bourbonnais - Performance Engineering wrote:
Tao Chen writes:
On 5/12/06, Roch Bourbonnais - Performance Engineering
[EMAIL PROTECTED] wrote:
From: Gregory Shaw [EMAIL PROTECTED]
Regarding directio and quickio, is there a way with ZFS to skip the
system buffer cache?
You could start with the ARC paper by Megiddo and Modha from the
FAST '03 conference. ZFS uses a variation of it. It's an interesting
read.
-r
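For those who don't want to chase the paper right away, a minimal
sketch of the structure it describes. This is the paper's abstraction,
not ZFS's actual implementation (which, as noted, is a variation), and
all names here are made up for illustration.

    #include <stddef.h>

    struct page_node {             /* illustrative list node */
        struct page_node *next;
        unsigned long     blkno;
    };

    struct arc_state {
        struct page_node *t1;  /* resident pages seen once recently (recency)   */
        struct page_node *t2;  /* resident pages seen twice or more (frequency) */
        struct page_node *b1;  /* "ghost" history of pages evicted from t1      */
        struct page_node *b2;  /* "ghost" history of pages evicted from t2      */
        size_t            p;   /* adaptation target for the size of t1: a hit   */
                               /* in b1 grows p, a hit in b2 shrinks it, so the */
                               /* cache self-tunes between recency and          */
                               /* frequency                                     */
    };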
Franz Haberhauer writes:
Gregory Shaw wrote On 05/11/06 21:15,:
Regarding directio and quickio, is there a way with ZFS to skip the
system buffer cache?
Franz Haberhauer writes:
'ZFS optimizes random writes versus potential sequential reads.'
This remark focused on the allocation policy during writes,
not the readahead that occurs during reads.
Data that are rewritten randomly but in place in a sequential,
contiguous file (like a
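To make the remark concrete, a minimal sketch of the workload in
question, with a hypothetical path and sizes: random in-place rewrites
of a file that was originally laid out sequentially. On ZFS each
rewritten block lands at a new location (copy-on-write), so a later
sequential scan of the file becomes scattered reads; readahead at read
time is a separate matter.

    #include <fcntl.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define BLOCK   8192
    #define NBLOCKS (128 * 1024)   /* a 1 GB file of 8 KB records */

    int main(void)
    {
        char buf[BLOCK] = { 0 };
        int fd = open("/tank/db/table.dat", O_WRONLY);   /* hypothetical */
        if (fd == -1)
            exit(1);

        for (int i = 0; i < 100000; i++) {
            off_t off = (off_t)(rand() % NBLOCKS) * BLOCK;
            pwrite(fd, buf, BLOCK, off);   /* each rewrite relocates a block */
        }
        close(fd);
        return 0;
    }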
Robert Milkowski writes:
Hello Roch,
Friday, May 12, 2006, 2:28:59 PM, you wrote:
RBPE Hi Robert,
RBPE Could you try 35 concurrent dd each issuing 128K I/O ?
RBPE That would be closer to how ZFS would behave.
You mean to UFS?
OK, I did try, and I get about 8-9 MB/s with
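For what it's worth, a minimal sketch of the experiment Roch suggests,
under the assumption that plain write(2) is close enough to dd: 35
concurrent writers, each issuing 128 KB I/Os to its own file. The paths
and the per-writer byte count are arbitrary.

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define WRITERS 35
    #define IOSIZE  (128 * 1024)
    #define COUNT   1024             /* 128 MB per writer */

    int main(void)
    {
        for (int w = 0; w < WRITERS; w++) {
            if (fork() == 0) {
                char path[64];
                static char buf[IOSIZE];   /* zero-filled by default */
                snprintf(path, sizeof (path), "/pool/test/f%d", w);
                int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
                if (fd == -1)
                    _exit(1);
                for (int i = 0; i < COUNT; i++)
                    write(fd, buf, IOSIZE);   /* one 128 KB I/O at a time */
                close(fd);
                _exit(0);
            }
        }
        while (wait(NULL) > 0)   /* reap all writers */
            ;
        return 0;
    }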
I thought the benefits were from skipping the read-ahead logic.
What was seen prior to the implementation of directio was this:
- System running a high(er) load. It was difficult to see why the
load was higher, as Oracle was the primary process(es).
After the implementation, the load on
On Fri, May 12, 2006 at 05:23:53PM +0200, Roch Bourbonnais - Performance
Engineering wrote:
For reads it is an interesting concept, since
reading into the cache,
then copying into user space,
then keeping the data around but never using it
is not optimal.
So, two issues: there is the cost
Nicolas Williams writes:
On Fri, May 12, 2006 at 05:23:53PM +0200, Roch Bourbonnais - Performance
Engineering wrote:
For reads it is an interesting concept, since
reading into the cache,
then copying into user space,
then keeping the data around but never using it
is not
On Fri, May 12, 2006 at 06:33:00PM +0200, Roch Bourbonnais - Performance
Engineering wrote:
Directio is non-POSIX anyway, and given that people have been
trained to inform the system that the cache won't be useful,
and that it's a hard problem to detect automatically, let's
avoid the copy and save
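As a footnote on the non-POSIX point: the nearest standardized way for
an application to say "this cached copy won't be useful" is
posix_fadvise(2), on systems that implement it (Solaris did not at the
time, as far as I know). A minimal sketch with a hypothetical path:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/data/scan.dat", O_RDONLY);   /* hypothetical */
        if (fd == -1) {
            perror("open");
            return 1;
        }

        char buf[64 * 1024];
        ssize_t n;
        off_t off = 0;

        while ((n = read(fd, buf, sizeof (buf))) > 0) {
            /* ... consume buf ... */

            /* Tell the kernel the cached pages won't be reused,
             * so it can drop them instead of keeping dead copies. */
            posix_fadvise(fd, off, n, POSIX_FADV_DONTNEED);
            off += n;
        }
        close(fd);
        return 0;
    }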
The ZFS discuss list is re-delivering old messages.
Have the problems with the archives been fixed?
Nico
On Fri, 2006-05-12 at 10:42 -0500, Anton Rang wrote:
Now, latency-wise, the cost of the copy is small compared to the
I/O, right? So it turns into an issue of saving some
CPU cycles.
CPU cycles and memory bandwidth (which both can be in short
supply on a database server).
We can
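A quick way to put numbers on that, if you want them: time a large
memcpy() to estimate how much memory bandwidth one extra buffer-cache
copy consumes per byte read. A minimal sketch (buffer size arbitrary);
compare the resulting MB/s against your aggregate I/O rate times the
number of concurrent streams.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/time.h>

    #define MB (1024 * 1024)
    #define SZ (256 * MB)

    int main(void)
    {
        char *src = malloc(SZ);
        char *dst = malloc(SZ);
        if (src == NULL || dst == NULL)
            return 1;
        memset(src, 1, SZ);   /* fault the pages in first */
        memset(dst, 1, SZ);

        struct timeval t0, t1;
        gettimeofday(&t0, NULL);
        memcpy(dst, src, SZ);   /* the copy every buffered read pays for */
        gettimeofday(&t1, NULL);

        double secs = (t1.tv_sec - t0.tv_sec) +
                      (t1.tv_usec - t0.tv_usec) / 1e6;
        printf("copied %d MB in %.3f s (%.0f MB/s)\n",
               SZ / MB, secs, SZ / MB / secs);
        return 0;
    }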
On Fri, May 12, 2006 at 09:59:56AM -0700, Richard Elling wrote:
On Fri, 2006-05-12 at 10:42 -0500, Anton Rang wrote:
Now, latency-wise, the cost of the copy is small compared to the
I/O, right? So it turns into an issue of saving some
CPU cycles.
CPU cycles and memory bandwidth
Nicolas Williams wrote:
The ZFS discuss list is re-delivering old messages.
Which message(s) was redelivered?
Have the problems with the archives been fixed?
Which problem specifically?
Derek
Nico
--
Derek Cicero
Program Manager
Solaris Kernel Group, Software Division
On May 12, 2006, at 11:59 AM, Richard Elling wrote:
CPU cycles and memory bandwidth (which both can be in short
supply on a database server).
We can throw hardware at that :-) Imagine a machine with lots
of extra CPU cycles [ ... ]
Yes, I've heard this story before, and I won't believe it
I really like the idea below:
- the ability to defragment a file 'live'.
I can see instances where that could be very useful. For instance,
if you have multiple LUNs (or spindles, whatever) using ZFS, you
could re-optimize large files to spread the chunks across as many
spindles
I'm looking at using ZFS as our main file server FS over here.
I can do the disk layout tuning myself, but what I'm more interested in
is getting thoughts on the amount of RAM that might help performance on
these machines. Assume I've got more than enough network and disk
bandwidth, and the
On Thu, 2006-05-11 at 17:01 -0700, Jeff Bonwick wrote:
plan A. To mirror on iSCSI devices:
keep one server with a set of ZFS file systems
with two (sub)mirrors each, one of the mirrors using
devices physically at a remote site, accessed as
iSCSI LUNs.
Looks like CR 6411261 "busy intent log runs out of space on small pools".
I found this one. I just bumped up the priority.
Jim
When unpacking the Solaris source onto a local disk
on a system running build 39 I got the following
panic:
panic[cpu0]/thread=d2c8ade0:
really out of space
On Fri, May 12, 2006 at 01:49:38PM -0700, Marion Hakanson wrote:
Greetings,
I've seen discussion that tar and cpio are ZFS ACL aware, and that
Veritas NetBackup is not. GNU tar is not (at this time); Joerg's star
probably will be Real Soon Now. Feel free to correct me if I'm wrong.
What