On Nov 23, 2006 14:37 -0700, Lee Ward wrote:
> On Thu, 2006-11-23 at 12:46 -0700, Peter Braam wrote:
> > On Wed, 2006-11-22 at 22:07 +0200, Oleg Drokin wrote:
> > > We use 2 OSTs per OSS in order to actively use both channels of the HBA
> > > -- staying away from channel bonding.
> >
> > So that limits (currently) your bandwidth to 50% of what is available.   
> > Making an OST volume through LVM can give you full bandwidth, without 
> > channel bonding.  I think we have just established that using LVM for 
> > RAID0 will not have an impact on performance.
> 
> Huh? In the aggregate, all the performance is there. The graphs reflect
> that. No way could we get 40 GB/s using only half of what the attached
> disks are capable of.
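
For concreteness, the LVM approach Peter describes might look something like
this (the device and volume names, stripe size, and LV size are hypothetical;
a sketch, not the actual configuration here):

    # One LUN per HBA channel, combined into a single striped LV per OST:
    pvcreate /dev/sda /dev/sdb
    vgcreate ostvg /dev/sda /dev/sdb
    # -i 2 stripes (RAID0) across both PVs, -I 64 sets a 64KB stripe size:
    lvcreate -i 2 -I 64 -L 500G -n ostlv ostvg

A single OST on such an LV can drive both channels without bonding.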

To clarify the previous comments:
- for FPP jobs, each file is striped over a single OST, but all 320 OSTs
  (hence 320 DDN tiers, 320 FC controllers) are in use because there are
  many files
- for SSF jobs, the one file is striped over at most 160 OSTs (the current
  Lustre limit on stripes for a single file), so at most 160 DDN tiers and
  160 FC controllers are in use (see the setstripe sketch below)
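
As an illustration of how the stripe count is controlled (the paths here are
hypothetical, and this uses the current "lfs setstripe" option syntax; older
releases used positional arguments):

    # FPP pattern: one stripe per file, many files spread over all 320 OSTs
    lfs setstripe -c 1 /mnt/lustre/out/rank0.dat
    # SSF pattern: a single shared file striped as widely as allowed
    lfs setstripe -c 160 /mnt/lustre/out/shared.dat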

This difference is why Oleg previously mentioned the "explained 2x difference"
in the tests: FPP can drive twice as many OSTs (and hence twice the backend
hardware) as SSF, so roughly twice the aggregate bandwidth is expected.

For an apples-to-apples comparison, it would be possible to deactivate 160
OSTs (one per OSS) on the MDS via "for N in ...; do lctl --device N
deactivate; done" (see the sketch below) and then run the SSF and FPP jobs
again.  This will limit the FPP jobs to 160 OSTs (like the SSF).  It might
also be useful to disable 161 OSTs (leaving 159 active) to avoid the aliasing
in the SSF case (with 160 OSTs the clients' power-of-two transfer patterns
can stay aligned so the same clients keep hitting the same OSTs; 159 breaks
that alignment), or alternatively have clients each write 7MB chunk sizes or
something.
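
As a sketch (the device numbers below are hypothetical; "lctl dl" on the MDS
lists the actual OSC device numbers):

    # List configured devices on the MDS to find the OSC device numbers:
    lctl dl | grep osc
    # Deactivate one OST per OSS, e.g.:
    for N in 7 9 11; do
            lctl --device $N deactivate
    done
    # "lctl --device <N> activate" re-enables them afterward.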

Cheers, Andreas
--
Andreas Dilger
Principal Software Engineer
Cluster File Systems, Inc.

