Hi Brock,

Yeah, that's likely to be an issue if each host has more than one path ...

What about using HA to force one path to be inactive at the device level? I
know QLogic FC cards support this functionality, although it requires
changing the options used by the driver kernel module ... mind you,
comparing that to a solution using the multipath daemon, that's six of one,
half a dozen of the other, I'd think.
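
For example, with the QLogic-supplied failover driver, something along
these lines in /etc/modprobe.conf should do it -- I'm going from memory
on the option names (ql2xfailover is in the QLogic out-of-tree driver,
not the stock kernel one), so check the README that ships with yours:

    # Enable the driver's built-in failover: only one path per LUN is
    # active, and the OS sees a single device instead of one per path.
    # qlport_down_retry caps how long I/O blocks on a dead port before
    # the driver fails over.
    options qla2xxx ql2xfailover=1 qlport_down_retry=30

The options are read at module load time, so you'd rebuild the initrd
(mkinitrd) and reload the module afterwards.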

Klaus

On 6/5/08 5:09 PM, "Brock Palen" <[EMAIL PROTECTED]> did etch on stone
tablets:

> This would be for the MDS/MGS only, but that's good to know.  Problem
> is, our two MDS servers (active/passive) will have two connections
> each to the same LUN, so there could be issues.
> 
> Brock Palen
> www.umich.edu/~brockp
> Center for Advanced Computing
> [EMAIL PROTECTED]
> (734)936-1985
> 
> 
> 
> On Jun 5, 2008, at 7:52 PM, Klaus Steden wrote:
>> 
>> Hi Brock,
>> 
>> I've got a Sun StorageTek array hooked up to one of our clusters, and
>> I'm using labels instead of multipathing. We've got it hooked up in a
>> fashion similar to Stuart's; it's a bit "slow and sloppy" when
>> initializing, but it works well enough, and there are no problems
>> once the OSTs are online.
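>> 
>> In case it's useful, this is roughly how the label approach looks on
>> our end (the device and label names below are illustrative, not our
>> real ones):
>> 
>>     # mkfs.lustre writes an ext3-style label (e.g. lustre-OST0000)
>>     # onto each target, so we mount by label instead of device name:
>>     e2label /dev/sdc                  # prints e.g. lustre-OST0000
>>     mount -t lustre -L lustre-OST0000 /mnt/ost0
>> 
>> The matching /etc/fstab entry uses LABEL=lustre-OST0000, which keeps
>> working even if the kernel renumbers the sd devices between boots.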
>> 
>> Klaus
>> 
>> On 6/5/08 3:57 PM, "Brock Palen" <[EMAIL PROTECTED]> did etch on stone
>> tablets:
>> 
>>> Our new Lustre hardware arrived from Sun today.  Looking at the dual
>>> MDS setup and the FC disk array for it, we will need multipath.
>>> Has anyone ever used multipath with Lustre?  Are there any issues?
>>> As far as I can tell from browsing the archives, if we set up
>>> regular multipath via LVM, Lustre won't care.
>>> 
>>> What about multipath without LVM?  Our StorageTek array has dual
>>> controllers, with dual ports going to dual-port FC cards in the
>>> MDSes.  Each MDS has a connection to both controllers, so we will
>>> need multipath to get any advantage from this.
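>>> 
>>> To make the question concrete, plain dm-multipath along these lines
>>> is what I had in mind (the WWID below is elided, and the alias is
>>> just an example):
>>> 
>>>     # /etc/multipath.conf
>>>     defaults {
>>>         user_friendly_names yes
>>>     }
>>>     multipaths {
>>>         multipath {
>>>             # find the real WWID with `multipath -ll` or
>>>             # `scsi_id -g -u -s /block/sdX`
>>>             wwid  360...
>>>             alias mdt
>>>         }
>>>     }
>>> 
>>> and then pointing mkfs.lustre and mount at /dev/mapper/mdt instead
>>> of the raw /dev/sdX paths.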
>>> 
>>> Comments?
>>> 
>>> 
>>> Brock Palen
>>> www.umich.edu/~brockp
>>> Center for Advanced Computing
>>> [EMAIL PROTECTED]
>>> (734)936-1985

_______________________________________________
Lustre-discuss mailing list
[email protected]
http://lists.lustre.org/mailman/listinfo/lustre-discuss
