Tuning the blocksize of a dataset or volume only pays off when the complete 
workload consists of random i/o with a single fixed size, e.g. 8k for Oracle 
or 16k for PostgreSQL/MySQL.
Both conditions have to be met: random access and a fixed blocksize.
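For a dataset dedicated to such a database you would set the recordsize to 
match the database block size; the dataset name tank/oradata below is just a 
placeholder:

zfs set recordsize=8k tank/oradata
zfs get recordsize tank/oradata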
The cluster size of NTFS should be chosen depending on the application set 
running on that Windows system. A relation between cluster size (Windows) and 
recordsize (ZFS) exists only in the former case: random i/o with a fixed 
blocksize.
In all other cases the default recordsize of 128k should be used.
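If the Windows system sees the storage as a LUN backed by a zvol, the 
corresponding knob is volblocksize, which, unlike recordsize, can only be set 
when the volume is created. A sketch, assuming a 16k NTFS cluster size and a 
made-up volume name:

zfs create -V 100g -o volblocksize=16k tank/ntfsvol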

I can't see any reason to limit the ARC size when the system is used as a 
dedicated storage box.
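Should you ever need to cap it anyway, e.g. because applications on the same 
box compete for memory, the usual knob on OpenSolaris is zfs_arc_max in 
/etc/system, followed by a reboot; the 4 GB value below is only an 
illustration:

set zfs:zfs_arc_max=0x100000000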
You can lower the txg sync time from 5 seconds to 2 seconds with:
echo zfs_txg_synctime_ms/W0t2000 | mdb -kw 
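Note that an mdb write only lasts until the next reboot. To make the change 
persistent you could add the equivalent line to /etc/system, assuming your 
build exposes the zfs_txg_synctime_ms tunable:

set zfs:zfs_txg_synctime_ms=2000

The current value can be checked with:

echo zfs_txg_synctime_ms/D | mdb -k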

This should eliminate i/o stalls like the ones visible in your zpool iostat 
output, where writes drop to zero for several intervals and then return in 
one large burst:
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        2.39T  33.9T      0    201      0   494K
tank        2.39T  33.9T      0      0      0      0
tank        2.39T  33.9T      0      0      0      0
tank        2.39T  33.9T      0  44.2K      0   350M

If your (IMHO pathological) tests are representative of your expected 
workloads, you should spend the money on a separate slog device on an SSD.
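Adding one to an existing pool is a one-liner; the device name is just a 
placeholder:

zpool add tank log c1t2d0

A mirrored slog (zpool add tank log mirror c1t2d0 c1t3d0) is the safer 
choice, since on older pool versions the loss of an unmirrored log device can 
render the pool unimportable.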

Andreas