Hi,

How sure are you that CPU scheduling is your problem?

Are you using IB or Ethernet?

I have seen problems that look like yours in the past with single-network 
Ethernet setups. 

Regards,

Vic

Sent from my iPhone

> On 2 Mar 2016, at 20:54, Matt Weil <[email protected]> wrote:
> 
> Can you share anything more?
> We are tying all system-related items to cpu0; GPFS is on cpu1, and the
> rest are used for the LSF scheduler.  With that setup we still see
> evictions.
> 
> Thanks
> Matt
> 
>> On 3/2/16 1:49 PM, Bryan Banister wrote:
>> We do use cgroups to isolate user applications into a separate cgroup, which 
>> leaves some headroom of CPU and memory resources for the rest of the 
>> system services, including GPFS and its required components such as SSH.
>> -B
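A minimal cgroups-v1 cpuset layout along the lines Bryan describes could be expressed in /etc/cgconfig.conf (libcgroup). This is only an illustrative sketch: the group names and core/NUMA-node assignments below are placeholders, not Bryan's actual configuration, and must be adjusted to your own socket topology.

```
# /etc/cgconfig.conf -- hypothetical layout; core numbers are placeholders.

# General system services (sshd, monitoring, etc.) on part of socket 0
group system {
    cpuset {
        cpuset.cpus = "0-7";
        cpuset.mems = "0";
    }
}

# Cores reserved for GPFS (mmfsd and friends)
group gpfs {
    cpuset {
        cpuset.cpus = "8-9";
        cpuset.mems = "0";
    }
}

# Everything launched by the batch scheduler goes here
group lsf {
    cpuset {
        cpuset.cpus = "10-31";
        cpuset.mems = "1";
    }
}
```

Processes are then placed into the groups with cgexec or cgclassify, or automatically via /etc/cgrules.conf, e.g. `cgexec -g cpuset:lsf <command>` to start a job inside the lsf group.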
>> 
>> -----Original Message-----
>> From: [email protected] 
>> [mailto:[email protected]] On Behalf Of Matt Weil
>> Sent: Wednesday, March 02, 2016 1:47 PM
>> To: gpfsug main discussion list
>> Subject: [gpfsug-discuss] cpu shielding
>> 
>> All,
>> 
>> We are seeing issues on our GPFS clients where mmfsd is not able to respond 
>> in time to renew its lease. Once that happens, the file system is unmounted.  
>> We are experimenting with cgroups to tie mmfsd and other processes to 
>> specified CPUs.  Any recommendations out there on how to shield GPFS from 
>> other processes?
>> 
>> Our system design has all PCI going through the first socket, and there 
>> seems to be some contention there, as the RAID controller with the SSDs and 
>> the NICs are on that same bus.
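One way to confirm that suspicion is to check which NUMA node each PCI device reports in sysfs. This is a generic Linux sketch, not GPFS-specific; the device addresses will differ on your hosts, and a reported node of -1 means the kernel could not determine locality.

```shell
#!/bin/sh
# Print the NUMA node of every PCI device known to the kernel.
for dev in /sys/bus/pci/devices/*; do
    node=$(cat "$dev/numa_node" 2>/dev/null)
    printf '%s: NUMA node %s\n' "${dev##*/}" "${node:-unknown}"
done
```

`lstopo` (from the hwloc package) shows the same information graphically, and `numactl --hardware` lists which cores belong to each node; together they tell you whether the NICs and the RAID controller really share socket 0's PCIe lanes with everything else.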
>> 
>> Thanks
>> 
>> Matt
>> 
>> 
>> ____
>> This email message is a private communication. The information transmitted, 
>> including attachments, is intended only for the person or entity to which it 
>> is addressed and may contain confidential, privileged, and/or proprietary 
>> material. Any review, duplication, retransmission, distribution, or other 
>> use of, or taking of any action in reliance upon, this information by 
>> persons or entities other than the intended recipient is unauthorized by the 
>> sender and is prohibited. If you have received this message in error, please 
>> contact the sender immediately by return email and delete the original 
>> message from all computer systems. Thank you.
>> _______________________________________________
>> gpfsug-discuss mailing list
>> gpfsug-discuss at spectrumscale.org
>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
