Re: [zfs-discuss] Self-tuning recordsize

2006-10-17 Thread Erblichs
Group, et al, I don't understand: if the problem is systemic, based on the number of continually dirty pages and the stress of cleaning those pages, then why . If the problem is FS-independent, because any number of different installed FSs can equally

Re: [zfs-discuss] Self-tuning recordsize

2006-10-17 Thread Jeremy Teo
Heya Roch, On 10/17/06, Roch [EMAIL PROTECTED] wrote: -snip- Oracle will typically create its files with 128K writes, not recordsize ones. Darn, that makes things difficult, doesn't it? :( Come to think of it, maybe we're approaching things from the wrong perspective. Databases such as Oracle

Re: [zfs-discuss] Self-tuning recordsize

2006-10-17 Thread Frank Cusack
On October 17, 2006 2:02:19 AM -0700 Erblichs [EMAIL PROTECTED] wrote: Group, et al, I don't understand: if the problem is systemic, based on the number of continually dirty pages and the stress of cleaning those pages, then why . If the problem is FS-independent,

Re: [zfs-discuss] Self-tuning recordsize

2006-10-16 Thread Roch
Matthew Ahrens writes: Jeremy Teo wrote: Would it be worthwhile to implement heuristics to auto-tune 'recordsize', or would that not be worth the effort? Here is one relatively straightforward way you could implement this. You can't (currently) change the recordsize once there

Re: [zfs-discuss] Self-tuning recordsize

2006-10-15 Thread Matthew Ahrens
Jeremy Teo wrote: Would it be worthwhile to implement heuristics to auto-tune 'recordsize', or would that not be worth the effort? Here is one relatively straightforward way you could implement this. You can't (currently) change the recordsize once there are multiple blocks in the file.
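The constraint Matthew describes — the recordsize can only change while a file still occupies a single block — suggests one place a heuristic could hook in. Below is a hedged sketch (illustrative Python, not ZFS source; all names are invented) of that idea: let the block size grow with the file while it fits in one block, and freeze it once the file spans multiple blocks.

```python
# Illustrative sketch of the single-block constraint on recordsize.
# Not ZFS code: class and field names here are hypothetical.

RECORDSIZE_MIN = 512
RECORDSIZE_MAX = 128 * 1024


def next_pow2(n):
    """Round n up to the next power of two, at least RECORDSIZE_MIN."""
    p = RECORDSIZE_MIN
    while p < n:
        p *= 2
    return p


class FileSketch:
    def __init__(self):
        self.size = 0
        self.blocksize = RECORDSIZE_MIN
        self.frozen = False  # True once the file spans multiple blocks

    def write(self, nbytes):
        self.size += nbytes
        if self.frozen:
            return
        if self.size <= RECORDSIZE_MAX:
            # Still a single block: the block size may grow with the file,
            # so a heuristic is free to re-pick it here.
            self.blocksize = next_pow2(self.size)
        else:
            # Crossing into a second block fixes the recordsize for good.
            self.blocksize = RECORDSIZE_MAX
            self.frozen = True


f = FileSketch()
f.write(4096)        # one 4K block; recordsize still adjustable
f.write(200 * 1024)  # now multiple blocks; recordsize frozen at 128K
```

The point of the sketch is only the window: any auto-tuning decision has to happen before the second block is written.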

Re: [zfs-discuss] Self-tuning recordsize

2006-10-15 Thread Torrey McMahon
Matthew Ahrens wrote: Jeremy Teo wrote: Would it be worthwhile to implement heuristics to auto-tune 'recordsize', or would that not be worth the effort? It would be really great to automatically select the proper recordsize for each file! How do you suggest doing so? Maybe I've been

Re: [zfs-discuss] Self-tuning recordsize

2006-10-14 Thread Nicolas Williams
On Fri, Oct 13, 2006 at 09:22:53PM -0700, Erblichs wrote: For extremely large files (25 to 100 GB) that are accessed sequentially for both read and write, I would expect 64k or 128k. Larger files accessed sequentially don't need any special heuristic for record size determination:

Re: [zfs-discuss] Self-tuning recordsize

2006-10-14 Thread Erblichs
Nico, Yes, I agree. But single large random reads and writes would also benefit from a large record size, so I didn't try to make that distinction. However, I guess that the best large random reads and writes would fall within single filesystem

[zfs-discuss] Self-tuning recordsize

2006-10-13 Thread Jeremy Teo
Would it be worthwhile to implement heuristics to auto-tune 'recordsize', or would that not be worth the effort? -- Regards, Jeremy ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
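As one way to make Jeremy's question concrete, here is a hedged sketch (illustrative Python only; the function name and sampling approach are assumptions, not anything proposed in the thread) of a minimal heuristic: observe a sample of I/O sizes on a file, take the dominant one, and round it up to a power of two clamped to ZFS's 512-byte to 128K recordsize range.

```python
# Hypothetical auto-tuning heuristic, not actual ZFS behavior:
# pick a recordsize from a sample of observed I/O sizes.
from collections import Counter

RECORDSIZE_MIN = 512
RECORDSIZE_MAX = 128 * 1024


def suggest_recordsize(io_sizes):
    """Round the dominant observed I/O size up to a power of two,
    clamped to the [RECORDSIZE_MIN, RECORDSIZE_MAX] range."""
    if not io_sizes:
        return RECORDSIZE_MAX  # no evidence: keep the default
    dominant, _count = Counter(io_sizes).most_common(1)[0]
    size = RECORDSIZE_MIN
    while size < dominant and size < RECORDSIZE_MAX:
        size *= 2
    return size


# A database doing mostly 8K random I/O would suggest an 8K recordsize;
# large streaming I/O clamps to the 128K maximum.
print(suggest_recordsize([8192] * 100 + [4096] * 3))
print(suggest_recordsize([1024 * 1024]))
```

Later replies in the thread point out why even this simple scheme is hard in practice: the recordsize cannot change once a file has multiple blocks, and (as Roch notes) an application's write size need not match its preferred record size.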

Re: [zfs-discuss] Self-tuning recordsize

2006-10-13 Thread Matthew Ahrens
Jeremy Teo wrote: Would it be worthwhile to implement heuristics to auto-tune 'recordsize', or would that not be worth the effort? It would be really great to automatically select the proper recordsize for each file! How do you suggest doing so? --matt

Re: [zfs-discuss] Self-tuning recordsize

2006-10-13 Thread Nicolas Williams
On Fri, Oct 13, 2006 at 08:30:27AM -0700, Matthew Ahrens wrote: Jeremy Teo wrote: Would it be worthwhile to implement heuristics to auto-tune 'recordsize', or would that not be worth the effort? It would be really great to automatically select the proper recordsize for each file! How do

Re: [zfs-discuss] Self-tuning recordsize

2006-10-13 Thread Erblichs
Group, I am not sure I agree with the 8k size. Since recordsize is based on the size of filesystem blocks for large files, my first consideration is what the max size of the file object will be. For extremely large files (25 to 100 GB) that are accessed