Group, et al.,
I don't understand: if the problem is systemic, based on
the number of continually dirty pages and the stress of cleaning
those pages, then why .
If the problem is FS-independent, because any number of
different installed FSs can equally
Heya Roch,
On 10/17/06, Roch [EMAIL PROTECTED] wrote:
-snip-
Oracle will typically create its files with 128K writes,
not recordsize-sized ones.
Darn, that makes things difficult doesn't it? :(
Come to think of it, maybe we're approaching things from the wrong
perspective. Databases such as Oracle
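Roch's point above is easy to make concrete with some arithmetic. The figures below are illustrative assumptions (an 8K dataset recordsize against Oracle-style 128K writes), not measurements from this thread:

```python
# How many filesystem records a single large, aligned application
# write touches, given a smaller dataset recordsize.
# 8K and 128K are illustrative assumptions, not measured values.

def records_per_write(write_size: int, recordsize: int) -> int:
    """Number of recordsize blocks one aligned write spans."""
    return (write_size + recordsize - 1) // recordsize

KB = 1024
print(records_per_write(128 * KB, 8 * KB))    # 128K write over 8K records -> 16
print(records_per_write(128 * KB, 128 * KB))  # matched recordsize -> 1
```

With a mismatched recordsize, each 128K write fans out across sixteen blocks, which is why the write size the application actually issues matters to any auto-tuning scheme.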
On October 17, 2006 2:02:19 AM -0700 Erblichs [EMAIL PROTECTED]
wrote:
Matthew Ahrens writes:
Jeremy Teo wrote:
Would it be worthwhile to implement heuristics to auto-tune
'recordsize', or would that not be worth the effort?
Here is one relatively straightforward way you could implement this.
You can't (currently) change the recordsize once there are multiple
blocks in the file.
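A sketch of the kind of heuristic being discussed, assuming (hypothetically) that the filesystem could observe the first few write sizes before a file grows past one block, after which, as Matthew notes, the recordsize can no longer change. The function names and thresholds here are invented for illustration and are not ZFS internals:

```python
# Hypothetical recordsize auto-tuning heuristic: before the file has
# more than one block, pick the smallest power-of-two record that
# covers the largest write seen so far, clamped to ZFS's limits.

MIN_RECORDSIZE = 512          # smallest ZFS block size
MAX_RECORDSIZE = 128 * 1024   # largest (and default) recordsize

def round_up_pow2(n: int) -> int:
    """Smallest power of two >= n, starting from MIN_RECORDSIZE."""
    p = MIN_RECORDSIZE
    while p < n:
        p *= 2
    return p

def choose_recordsize(first_write_sizes: list[int]) -> int:
    """Pick a recordsize from the early write sizes of a new file."""
    if not first_write_sizes:
        return MAX_RECORDSIZE  # no evidence; keep the default
    candidate = round_up_pow2(max(first_write_sizes))
    return min(candidate, MAX_RECORDSIZE)
```

For example, a file whose first writes are all 8K would get an 8K recordsize, while anything writing 128K or larger would simply keep the default.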
Matthew Ahrens wrote:
Jeremy Teo wrote:
Would it be worthwhile to implement heuristics to auto-tune
'recordsize', or would that not be worth the effort?
It would be really great to automatically select the proper recordsize
for each file! How do you suggest doing so?
Maybe I've been
On Fri, Oct 13, 2006 at 09:22:53PM -0700, Erblichs wrote:
For extremely large files (25 to 100 GBs) that are accessed
sequentially for both read and write, I would expect 64k or 128k.
Larger files accessed sequentially don't need any special heuristic for
record size determination:
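For the sequential large-file case no heuristic is needed because recordsize is already a per-dataset tunable; a minimal sketch, where "tank/bigfiles" is a placeholder dataset name:

```shell
# Set the recordsize on a dataset holding large, sequentially
# accessed files; 128k is also the default maximum.
zfs set recordsize=128k tank/bigfiles

# Confirm the property on the dataset.
zfs get recordsize tank/bigfiles
```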
Nico,
Yes, I agree.
But single large random reads and writes would also
benefit from a large record size, so I didn't try to make that
distinction. However, I guess that the best large random
reads and writes would fall within a single filesystem
Would it be worthwhile to implement heuristics to auto-tune
'recordsize', or would that not be worth the effort?
--
Regards,
Jeremy
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Jeremy Teo wrote:
Would it be worthwhile to implement heuristics to auto-tune
'recordsize', or would that not be worth the effort?
It would be really great to automatically select the proper recordsize
for each file! How do you suggest doing so?
--matt
Group,
I am not sure I agree with the 8k size.
Since recordsize is based on the size of filesystem blocks
for large files, my first consideration is what will be
the max size of the file object.
For extremely large files (25 to 100 GBs) that are accessed