mmlspdisk all --not-ok does not indicate any failed hard drives.

On Wed, Jul 10, 2019 at 7:22 AM Buterbaugh, Kevin L <[email protected]> wrote:
> Hi Damir,
>
> Have you checked to see whether gssio4 might have a failing internal HD / SSD? Thanks…
>
> Kevin
>
> On Jul 10, 2019, at 7:16 AM, Damir Krstic <[email protected]> wrote:
>
> Over the last couple of days, reads and writes from our compute cluster have been very slow on one of the filesystems in the cluster. We are talking with IBM and have a Sev. 1 ticket open, but I figured I would ask here about the warning message we are seeing in the GPFS logs.
>
> The cluster is configured as follows:
>
> 4 IO servers in the main GPFS cluster
> 700+ compute nodes in the GPFS cluster
>
> The home filesystem is slow, but the projects filesystem seems to be fast. Not many waiters on the IO servers (almost none), but a lot of waiters on the remote cluster.
>
> The message that is giving us pause is the following:
> Jul 10 07:05:31 gssio4 mmfs: [N] Writing into file /var/mmfs/gen/LastLeaseRequestSent took 10.5 seconds
>
> Why is it taking so long to write to the local file?
>
> Thanks,
> Damir
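A note on the two local-disk angles above (the mmlspdisk result and the slow write to /var/mmfs/gen/LastLeaseRequestSent): as far as I know, mmlspdisk only reports on the recovery group pdisks, not the server's internal OS drive, so a direct check of the local disk on gssio4 may still be worthwhile. A rough sketch, assuming the OS drive is /dev/sda and using a throwaway scratch file (both are placeholders, adjust for the actual layout):

    # SMART health summary of the internal drive (requires smartmontools)
    smartctl -H /dev/sda

    # per-device utilization and wait times while the slowness is happening
    iostat -x 1 10

    # crude synchronous-write latency test on the filesystem that holds /var/mmfs/gen
    # (100 x 4 KiB writes with a data sync per write, then remove the scratch file)
    dd if=/dev/zero of=/var/mmfs/gen/ddtest.tmp bs=4k count=100 oflag=dsync
    rm -f /var/mmfs/gen/ddtest.tmp

If the dd run takes seconds rather than a fraction of a second, that points at the local drive or its controller rather than at GPFS itself.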
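On the waiters observation: a quick way to capture them from both sides might look like the following (mmdsh is the remote-shell helper shipped with GPFS; a plain ssh loop over the nodes works just as well):

    # longest current waiters on the node this is run on
    mmdiag --waiters

    # the same, gathered from every node in the local cluster
    mmdsh -N all /usr/lpp/mmfs/bin/mmdiag --waiters

Running mmdiag --waiters from a node in the remote compute cluster should show whether the long waiters there are all waiting on RPC replies from gssio4, which would tie the compute-side slowness back to that one IO server.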
