Re: [gpfsug-discuss] Well, this is the pits...

2017-05-04 Thread Kumaran Rajaram
… pitWorkerThreadsPerNode=3 -N <8_NSD_Servers>" such that (8 x 3) is less than 31.

Regards,
-Kums

From: "Buterbaugh, Kevin L" <kevin.buterba...@vanderbilt.edu>
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date: 05/04/2017 12:57 …
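The sizing rule in the excerpt above (participating nodes times pitWorkerThreadsPerNode must stay below 31) can be sketched as a quick shell check; the node count and thread value are just the 8 x 3 example from the thread, not a general recommendation:

```shell
#!/bin/sh
# Sketch of the sizing rule from the message above: the number of
# participating NSD servers times pitWorkerThreadsPerNode should
# stay below 31 (values here are the thread's 8 x 3 example).
nodes=8
threads_per_node=3
total=$((nodes * threads_per_node))
if [ "$total" -lt 31 ]; then
    echo "OK: $total PIT worker threads total (< 31)"
else
    echo "Too many: $total PIT worker threads total (>= 31); lower -N or pitWorkerThreadsPerNode"
fi
```

With 8 nodes at 3 threads each the total is 24, which satisfies the constraint; 8 nodes at 4 threads (32) would not.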

Re: [gpfsug-discuss] Well, this is the pits...

2017-05-04 Thread Buterbaugh, Kevin L
… pitWorkerThreadsPerNode)

Regards,
-Kums …

Re: [gpfsug-discuss] Well, this is the pits...

2017-05-04 Thread Kumaran Rajaram
… to take effect on the participating nodes (verify with mmfsadm dump config | grep pitWorkerThreadsPerNode).

Regards,
-Kums …

Re: [gpfsug-discuss] Well, this is the pits...

2017-05-04 Thread Olaf Weiser

Re: [gpfsug-discuss] Well, this is the pits...

2017-05-04 Thread Olaf Weiser
No, it is just in the code, because we have to avoid running out of mutexes / blocks. Reducing the number of nodes (-N) down to 4 (2 nodes is even safer) is the easiest way to solve it for now. I've been told the real root cause will be fixed in one of the next PTFs, within this year. This …
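Pulling the advice from the thread together, the workaround might look like the following sketch. The setting and verification commands appear in the messages above; the node names (nsd01..nsd04) and filesystem name (gpfs0) are placeholders, and the mmrestripefs invocation is an assumed example of limiting the operation with -N, not a command quoted from the thread:

```shell
# Cap PIT worker threads on the participating NSD servers
# (8 servers x 3 threads = 24, which stays below the limit of 31):
mmchconfig pitWorkerThreadsPerNode=3 -N nsd01,nsd02,nsd03,nsd04

# Verify the setting took effect on the participating nodes:
mmfsadm dump config | grep pitWorkerThreadsPerNode

# Per the advice above, run the restripe on at most 4 nodes
# (2 nodes is even safer) until the root cause is fixed in a PTF:
mmrestripefs gpfs0 -b -N nsd01,nsd02,nsd03,nsd04
```

Limiting -N trades restripe speed for safety; once the fix ships in a later PTF, the node list could presumably be widened again.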