Re: [zfs-discuss] 100% random writes coming out as 50/50 reads/writes

2008-02-19 Thread Glinty McFrikknuts
Thanks for the suggestions. I re-created the pool, set the record size to 8K, re-created the file and increased the I/O size from the application. It's nearly all writes now.
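The sequence described above might look roughly like the following; the pool name, device names, file path, and sizes are placeholders, not details taken from the thread:

  # Placeholders throughout: tank, c1t0d0..c1t3d0, /tank/testfile.
  # Re-create the pool, striped across four disks.
  zpool destroy tank
  zpool create tank c1t0d0 c1t1d0 c1t2d0 c1t3d0

  # Set the 8K recordsize *before* the test file is created, so its
  # blocks are laid down as 8K records.
  zfs set recordsize=8k tank

  # Re-create the 100GB test file with large sequential writes.
  dd if=/dev/zero of=/tank/testfile bs=1024k count=102400

  # Re-run the random 8K write workload and watch the read/write mix.
  zpool iostat tank 5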

Re: [zfs-discuss] 100% random writes coming out as 50/50 reads/writes

2008-02-15 Thread Neil Perrin
Nathan Kroenert wrote: And something I was told only recently - It makes a difference if you created the file *before* you set the recordsize property. If you created them after, then no worries, but if I understand correctly, if the *file* was created with 128K recordsize, then it'll

Re: [zfs-discuss] 100% random writes coming out as 50/50 reads/writes

2008-02-15 Thread Richard Elling
Nathan Kroenert wrote: And something I was told only recently - It makes a difference if you created the file *before* you set the recordsize property. Actually, it has always been true for RAID-0, RAID-5, RAID-6. If your I/O strides over two sets then you end up doing more I/O, perhaps twice

Re: [zfs-discuss] 100% random writes coming out as 50/50 reads/writes

2008-02-15 Thread Mattias Pantzare
If you created them after, then no worries, but if I understand correctly, if the *file* was created with 128K recordsize, then it'll keep that forever... Files have nothing to do with it. The recordsize is a file system parameter. It gets a little more complicated because the
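For what it's worth, the property is inspected and changed per dataset, not per file; the dataset name below is a placeholder:

  zfs get recordsize tank/fs
  zfs set recordsize=8k tank/fs
  zfs get recordsize tank/fs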

Re: [zfs-discuss] 100% random writes coming out as 50/50 reads/writes

2008-02-15 Thread Nathan Kroenert
What about new blocks written to an existing file? Perhaps we could make that clearer in the manpage too... hm. Mattias Pantzare wrote: If you created them after, then no worries, but if I understand correctly, if the *file* was created with 128K recordsize, then it'll keep that

Re: [zfs-discuss] 100% random writes coming out as 50/50 reads/writes

2008-02-15 Thread Nathan Kroenert
Hey, Richard - I'm confused now. My understanding was that any files created after the recordsize was set would use that as the new maximum recordsize, but files already created would continue to use the old recordsize. Though I'm now a little hazy on what will happen when the new existing

[zfs-discuss] 100% random writes coming out as 50/50 reads/writes

2008-02-14 Thread Glinty McFrikknuts
I'm running on s10s_u4wos_12b and doing the following test. Create a pool, striped across 4 physical disks from a storage array. Write a 100GB file to the filesystem (dd from /dev/zero out to the file). Run I/O against that file, doing 100% random writes with an 8K block size. zpool iostat shows
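A rough way to watch this from the outside (pool name assumed): with the default 128K recordsize, each random 8K overwrite means reading a 128K record and writing 128K back out, so pool-level reads and writes come out roughly even, which is consistent with the 50/50 split being reported:

  zpool iostat -v tank 5
  iostat -xnz 5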

Re: [zfs-discuss] 100% random writes coming out as 50/50 reads/writes

2008-02-14 Thread Anton B. Rang
Create a pool [ ... ] Write a 100GB file to the filesystem [ ... ] Run I/O against that file, doing 100% random writes with an 8K block size. Did you set the record size of the filesystem to 8K? If not, each 8K write will first read 128K, then write 128K. Anton

Re: [zfs-discuss] 100% random writes coming out as 50/50 reads/writes

2008-02-14 Thread Richard Elling
Anton B. Rang wrote: Create a pool [ ... ] Write a 100GB file to the filesystem [ ... ] Run I/O against that file, doing 100% random writes with an 8K block size. Did you set the record size of the filesystem to 8K? If not, each 8K write will first read 128K, then write 128K.

Re: [zfs-discuss] 100% random writes coming out as 50/50 reads/writes

2008-02-14 Thread Nathan Kroenert
And something I was told only recently - It makes a difference if you created the file *before* you set the recordsize property. If you created them after, then no worries, but if I understand correctly, if the *file* was created with 128K recordsize, then it'll keep that forever... Assuming
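If that is the case, one commonly suggested workaround is to rewrite the file after changing the property, so its blocks get re-allocated at the new record size (dataset and file names are placeholders):

  zfs set recordsize=8k tank/fs
  cp /tank/fs/testfile /tank/fs/testfile.new
  mv /tank/fs/testfile.new /tank/fs/testfile

Note that the copy transiently needs as much free space again as the file itself (about 100GB for the test file in this thread).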