Hi Kevin,

I suppose the quota check is done when the writing node allocates blocks 
to write to.
Mind that the detour via NSD servers is transparent at that layer: GPFS 
may switch between SCSI/SAN paths to a (direct-attached) block device and 
the NSD service via a separate NSD server, and both ways are logically 
equivalent from the writing node's point of view (or should be, for your 
purposes).

In short: yes, I think you need to roll out your "quota exceeded" 
call-back to all nodes in the HPC cluster. 
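A minimal sketch of what that rollout might look like. The callback 
identifier, script path, and parameter list below are placeholders, not 
taken from your setup; check the mmaddcallback man page for the exact 
%-variables your script expects and the node-class names valid in your 
cluster:

```shell
# Hypothetical example: register the softQuotaExceeded callback so it can
# fire on every node in the HPC cluster, not only on the NSD servers.
# "quotaMailer" and the script path are placeholders for your own names.
mmaddcallback quotaMailer \
    --command /usr/local/sbin/quota_mail.py \
    --event softQuotaExceeded \
    -N all \
    --parms "%eventName %fsName"
```

Since softQuotaExceeded is a local event, restricting -N to the servers 
means only writes performed on those nodes can trigger it; with native 
GPFS clients doing their own block allocation, the event fires on the 
client, so the callback must be in effect there too.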

Mit freundlichen Grüßen / Kind regards

Dr. Uwe Falke
IT Specialist
High Performance Computing Services / Integrated Technology Services / 
Data Center Services
IBM Deutschland
Rathausstr. 7
09111 Chemnitz
Phone: +49 371 6978 2165
Mobile: +49 175 575 2877
E-Mail: uwefa...@de.ibm.com
IBM Deutschland Business & Technology Services GmbH / Geschäftsführung: 
Thomas Wolter, Sven Schooß
Sitz der Gesellschaft: Ehningen / Registergericht: Amtsgericht Stuttgart, 
HRB 17122 

From:   "Buterbaugh, Kevin L" <kevin.buterba...@vanderbilt.edu>
To:     gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Date:   04/04/2018 22:51
Subject:        [gpfsug-discuss] Local event
Sent by:        gpfsug-discuss-boun...@spectrumscale.org

Hi All, 

According to the man page for mmaddcallback:

        A local event triggers a callback only on the node on which
        the event occurred, such as mounting a file system on one of
        the nodes.

We have two GPFS clusters here (well, three if you count our small test 
cluster).  Cluster one has 8 NSD servers and one client, which is used 
only for tape backup, i.e. no one logs on to any of the nodes in the 
cluster.  Files on it are accessed one of three ways:  1) CNFS mount to 
local computer, 2) SAMBA mount to local computer, 3) GPFS multi-cluster 
remote mount to cluster two.  On cluster one there is a user callback for 
softQuotaExceeded that e-mails the user, and that we know works.

Cluster two has two local GPFS filesystems and over 600 clients natively 
mounting those filesystems (it's our HPC cluster).  I'm trying to 
implement a similar callback for softQuotaExceeded events on cluster two 
as well.  I've tested the callback by manually running the (Python) script 
and passing it the parameters I want, and it works - I get the e-mail. 
Then I added it via mmaddcallback, but only on the GPFS servers.

I did that because I thought that since callbacks work on cluster one with 
no local access to the GPFS servers, "local" must mean "when an NSD 
server does a write that puts the user over quota".  However, on cluster 
two the callback is not being triggered.  Does this mean that I actually 
need to install the callback on every node in cluster two?  If so, then 
how / why are callbacks working on cluster one?



Kevin Buterbaugh - Senior System Administrator
Vanderbilt University - Advanced Computing Center for Research and 
kevin.buterba...@vanderbilt.edu - (615)875-9633

gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
