Hello qihua,
Saturday, December 27, 2008, 7:04:06 AM, you wrote:
After we changed the recordsize to 8K, we first used dd to move the data files around. We could see the time to recover an archive log dropped from 40 mins to 4 mins. But when using iostat to check, the read I/O is about 8K
Hi Qihua, there are many reasons why the recordsize does not govern
the I/O size directly. Metadata I/O is one, ZFS I/O scheduler
aggregation is another.
The application behavior might be a third.
Make sure to create the DB files after modifying the ZFS property.
-r
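For reference, a minimal sketch of that procedure (dataset and file names are hypothetical; the recordsize property only affects files written after it is changed):

$ zfs set recordsize=8k tank/oradata
$ cp /tank/oradata/users01.dbf /tank/oradata/users01.dbf.8k
$ mv /tank/oradata/users01.dbf.8k /tank/oradata/users01.dbf

The cp writes a brand-new file, so it is laid out in 8K records; the original file would have kept its old 128K records.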
On 26 Dec 08 at 11:49,
After I changed the recordsize to 8K, it seems the read/write size is not
always 8K when checking with zpool iostat. So ZFS doesn't obey the
recordsize strictly?
UC4-zuc4arch$ zfs get recordsize
NAME  PROPERTY    VALUE  SOURCE
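For comparison, with a hypothetical dataset name the full output of that command looks like:

$ zfs get recordsize tank/oradata
NAME          PROPERTY    VALUE    SOURCE
tank/oradata  recordsize  8K       local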
On Fri, 26 Dec 2008 18:49:41 +0800, qihua wu <staywith...@gmail.com> wrote:
After I changed the recordsize to 8K, it seems the read/write size is not
always 8K when checking with zpool iostat. So ZFS doesn't obey the
recordsize strictly?
Did you recreate the database? Existing files keep the old recordsize.
After we changed the recordsize to 8K, we first used dd to move the data
files around. We could see the time to recover an archive log dropped from
40 mins to 4 mins. But when using iostat to check, the read I/O is about 8K
for each read, while the write I/O is still 128K for each write. Then we used cp
to
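A sketch of the kind of iostat check described above (pool name hypothetical):

$ zpool iostat tank 5

Dividing the bandwidth columns by the operations columns gives the average size per read and per write in each interval; that is how 8K reads alongside 128K writes would show up.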
Hi, All,
We have an Oracle standby running on ZFS and the database recovers very, very
slowly. The problem is that the I/O performance is very bad. I find the recordsize
of the ZFS filesystem is 128K, and the Oracle block size is 8K.
My question is:
When Oracle tries to write an 8K block, will ZFS read in the whole 128K record first?
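One way to test this empirically (hypothetical dataset and file names; GNU dd flags assumed): overwrite 8K in the middle of an existing file while watching zpool iostat in another terminal. If the record is not cached in the ARC, a partial-record overwrite should show up as a roughly 128K read followed by the record write (read-modify-write):

$ zfs create -o recordsize=128k tank/rmwtest
$ dd if=/dev/zero of=/tank/rmwtest/f bs=128k count=1000
$ zfs unmount tank/rmwtest && zfs mount tank/rmwtest    (remount to drop cached records)
$ dd if=/dev/zero of=/tank/rmwtest/f bs=8k count=1 seek=100 conv=notrunc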