On Tue, May 01, 2012 at 03:21:05AM -0700, Gary Driggs wrote:
> On May 1, 2012, at 1:41 AM, Ray Van Dolson wrote:
> > Throughput:
> > iozone -m -t 8 -T -r 128k -o -s 36G -R -b bigfile.xls
> > IOPS:
> > iozone -O -i 0 -i 1 -i 2 -e -+n -r 128K -s 288G > iops.txt
> Do you expect to be reading or writing 36 GB or 288 GB files very often on
> this array? The largest file size I've used in my (already lengthy)
> benchmarks was 16 GB. With the sizes you've proposed, the runs could
> take several days or even weeks to complete. Try a web search for "iozone
> examples" if you want more details on the command switches.
The problem is that this box has 144 GB of memory. If I go with a 16 GB
file size (which I did), memory and caching influence the results
pretty severely (I get around 3 GB/sec for writes!).
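
A common rule of thumb for defeating cache effects is to benchmark with a file at least twice the size of RAM, so the ARC/page cache cannot hold the working set; the -s 288G in the IOPS command above is exactly that for a 144 GB box. A quick sizing sketch:

```shell
# Rule-of-thumb iozone file sizing: file size >= 2x RAM so writes and
# re-reads cannot be satisfied entirely from the ARC/page cache.
RAM_GB=144
FILE_GB=$((RAM_GB * 2))
echo "${FILE_GB}G"   # -> 288G, matching the -s 288G used in the IOPS run
```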
Obviously, I could yank RAM for benchmarking purposes... :)
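
If this is a Solaris/illumos box, an alternative to pulling DIMMs is capping the ARC with the zfs_arc_max tunable in /etc/system (a sketch only; the 4 GiB value is an arbitrary example, and a reboot is required for it to take effect):

```
* /etc/system -- cap the ZFS ARC for the benchmark run (example value)
* zfs_arc_max is in bytes; 0x100000000 = 4 GiB
set zfs:zfs_arc_max = 0x100000000
```

With the ARC capped well below the test file size, a 16 GB file would again be mostly uncached, without physically removing memory.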
zfs-discuss mailing list