Hi All,
I have a two-site replicated GPFS cluster running GPFS v3.5.0-26. We have recently run into a performance problem while exporting an SMB mount to one of our client labs. Specifically, this lab is attempting to run a MATLAB SPM job against the SMB share and is seeing sharply degraded performance versus running it over NFS to their own NFS server. The job does time-slice correction on MRI image volumes, resulting in roughly 15,000 file creates, plus at least one read and at least one write to each file.

Here is the time-to-completion for this job, as run under various conditions:

1) Backed by their local fileserver, running over NFS - 5 min
2) Backed by our GPFS, running over SMB - 30 min
3) Backed by our GPFS, running over NFS - 20 min
4) Backed by local disk on our exporting protocol node, over SMB - 6 min
5) Backed by local disk on our exporting protocol node, over NFS - 6 min
6) Backed by GPFS, running over the GPFS native client on our supercomputer - 2 min

From this list, it seems that the performance problems arise when combining either SMB or NFS with the GPFS back-end. We conclude that neither SMB nor NFS per se creates the problem, since exporting a local-disk share over either of these protocols yields decent performance.

Do you have any insight as to why the combination of the GPFS back-end with either NFS or SMB yields such anemic performance? Can you offer any tuning recommendations that may improve performance when running over SMB to the GPFS back-end (our preferred method of deployment)?

Thank you so much for your help as always!

Stewart Howard
Indiana University
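In case it helps anyone reproduce this without MATLAB/SPM, here is a minimal sketch of the access pattern described above (many small-file creates, each followed by one write and one read). The function name, file count, and 4 KB payload size are my own assumptions, not taken from the SPM job itself; pointing `root` at the SMB mount vs. local disk should expose the same metadata-latency gap:

```python
import os
import tempfile
import time


def file_churn_benchmark(root, n_files=1000, payload=b"x" * 4096):
    """Hypothetical stand-in for the SPM workload: create n_files small
    files under root, writing each once and reading it back once.
    Returns elapsed wall-clock seconds."""
    start = time.monotonic()
    for i in range(n_files):
        path = os.path.join(root, f"vol_{i:05d}.dat")
        with open(path, "wb") as f:   # one create + one write per file
            f.write(payload)
        with open(path, "rb") as f:   # one read back per file
            data = f.read()
        assert data == payload        # sanity check, not part of the timing goal
    return time.monotonic() - start


if __name__ == "__main__":
    # By default this times a throwaway local directory; replace with a
    # path on the SMB/NFS/GPFS mount under test to compare back-ends.
    with tempfile.TemporaryDirectory() as d:
        elapsed = file_churn_benchmark(d, n_files=1000)
        print(f"1000 create/write/read cycles took {elapsed:.2f}s")
```

Since the job is dominated by file creates, the interesting number is seconds per create/write/read cycle on each back-end, not raw bandwidth.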
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
