Mark Post writes:

> Well, there seems to be some disagreement here.  Gordon Wolfe says he
> sees a decrease in performance when doing this.  You say you get good
> results.  Would either or both of you be willing to publish your
> results and methodology?

Unfortunately, I'm not in a position to post performance results or make
performance claims. However, I have seen higher rates on a zSeries/Shark
for a single stream under LVM than for a file of the same size on a
single volume:

To minimize the effect of memory cache size (a rough sketch of the loop
follows the list):

1) I write many files, so that the aggregate data moved is much greater
than both the memory size and the DASD cache size. I call fsync() or
fdatasync() after each 1 MB chunk is written.
2) I reread the data only after all the writes complete, so that none of
the files being read still reside in memory or cache.
3) I've tried file sizes from 1 MB to 2047 MB.
4) As a variation, I've also tried moving an aggregate data size greater
than memory but less than the DASD cache.
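
For concreteness, the write/reread loop looks roughly like the sketch
below. This is not Reba itself; the file count, chunk size, and file
names are illustrative, and timing is reduced to a simple wall-clock
measurement:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    #define CHUNK (1024 * 1024)       /* 1 MB written per syscall       */
    #define CHUNKS_PER_FILE 64        /* 64 MB/file; vary 1..2047 MB    */
    #define NFILES 32                 /* aggregate >> RAM + DASD cache  */

    static double now(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(void)
    {
        static char buf[CHUNK];
        char name[64];
        memset(buf, 0xA5, sizeof buf);

        /* Phase 1: write every file, syncing each 1 MB chunk so the
         * data is forced past the page cache toward the device. */
        double t0 = now();
        for (int f = 0; f < NFILES; f++) {
            snprintf(name, sizeof name, "stream.%d", f);
            int fd = open(name, O_WRONLY | O_CREAT | O_TRUNC, 0644);
            if (fd < 0) { perror("open"); return 1; }
            for (int c = 0; c < CHUNKS_PER_FILE; c++) {
                if (write(fd, buf, CHUNK) != CHUNK) {
                    perror("write");
                    return 1;
                }
                fdatasync(fd);        /* or fsync(fd) */
            }
            close(fd);
        }
        double t1 = now();

        /* Phase 2: only after ALL writes finish, reread everything,
         * so the blocks read are no longer in memory or DASD cache. */
        for (int f = 0; f < NFILES; f++) {
            snprintf(name, sizeof name, "stream.%d", f);
            int fd = open(name, O_RDONLY);
            if (fd < 0) { perror("open"); return 1; }
            while (read(fd, buf, CHUNK) > 0)
                ;                     /* discard; only the rate matters */
            close(fd);
        }
        double t2 = now();

        double mb = (double)NFILES * CHUNKS_PER_FILE;
        printf("write: %.1f MB/s  reread: %.1f MB/s\n",
               mb / (t1 - t0), mb / (t2 - t1));
        return 0;
    }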

Using this approach, I can see the effects of both device speed and
multiple channels on a single file on DASD, and on a single file on LVM.
This can be extended to looking at multiple concurrent files.
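
One simple (hypothetical) way to get concurrent files is to fork one
copy of the single-stream program per stream; "./stream_test" and the
stream count below are placeholders, and each child would need its own
file-name prefix to avoid collisions:

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        const int streams = 4;           /* illustrative stream count */

        for (int i = 0; i < streams; i++) {
            if (fork() == 0) {
                /* Each child runs the single-stream write/reread loop
                 * against its own set of files (unique name prefix). */
                execl("./stream_test", "stream_test", (char *)NULL);
                perror("execl");
                _exit(1);
            }
        }
        while (wait(NULL) > 0)           /* wait for every child */
            ;
        return 0;
    }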

I've been using a tool called Reba, based on Bonnie, which concentrates
on block writes and reads and allows the read to occur much later than
the write, to focus on the effect of DASD. If you want a copy, I can
send you one.

Regards, Jim
Linux S/390-zSeries Support, SEEL, IBM Silicon Valley Labs
t/l 543-4021, 408-463-4021, [EMAIL PROTECTED]
*** Grace Happens ***
