We’re using the Elasticsearch plugin, and it stores the entire job
script, which is easily searchable from Kibana. You of course need to
have Elasticsearch and Kibana set up first, but once you do it’s
trivial to activate in SLURM:

JobCompType             = jobcomp/elasticsearch
JobCompLoc              = http://my.local.elasticsearch.server:9200 
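
Once records are flowing in, searching the stored scripts is a plain
Elasticsearch query. A rough sketch (the index name "slurm" and the
"script" field are assumptions on my part; check what your plugin
version actually writes):

# Full-text search of stored job scripts for a given executable.
# Index and field names here are assumptions; adjust for your setup.
curl -s 'http://my.local.elasticsearch.server:9200/slurm/_search' \
     -H 'Content-Type: application/json' \
     -d '{"query": {"match": {"script": "a.out"}}}'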

One downside is that the plugin was contributed and is not officially
supported by SchedMD. There are a few features of the plugin that we
don’t love but haven’t quite had the spare cycles to change.
Specifically:

1. The plugin stores all job records in a single huge index. For
performance reasons, we’d like to store job records in a separate
index for each week or month, much like Logstash does.
2. The node list for each job is stored as a compact range (e.g.
node[001-003]) rather than as an explicit list (e.g. node001, node002,
node003), which makes it difficult to run searches/queries for a
specific node (a workaround sketch follows this list).
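
In the meantime, one workaround for (2) is to expand the range before
querying, using Slurm’s own hostlist expander (a sketch only; we
haven’t wired this into the plugin or an ingest step):

# Expand a compact hostlist into individual hostnames, one per line
$ scontrol show hostnames node[001-003]
node001
node002
node003

Doing that expansion at ingest time (e.g. in a small indexing shim in
front of Elasticsearch) would let each job record store one entry per
node.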

Will 


> On Mar 22, 2017, at 11:31 AM, E.M. Dragowsky <[email protected]> wrote:
> 
> Greetings,
> 
> Is there a recommended tool to supplement slurm, so as to parse from the 
> script files the names of executables that run under each jobid? Or to 
> otherwise keep a record of the user:executable usage through the scheduler?
> 
> From a review of accounting information, I do not see that this type of 
> information is brought to the database.
> 
> Thanks in advance,
> ~ E.m
> 
> -- 
> ----------------------------------
> E.M. Dragowsky, Ph.D.
> Research Computing -- UTech
> Case Western Reserve University
> (216) 368-0082
