I haven't tested with super ancient versions of Slurm, but I know we have loaded past archives before so we could scrape the data for XDMoD.  So as far as I'm aware there is no version limitation, though your mileage may vary with very old versions of Slurm.  To be sure, I would ping SchedMD about any limitations they are aware of.  They are usually pretty good about being comprehensive in their docs, so they would probably have mentioned it if there was one.

-Paul Edmon-

On 12/13/2021 5:07 AM, Loris Bennett wrote:
Hi Paul,

Am I right in assuming that there are going to be some limitations to
loading archived data with respect to the version of slurmdbd used to
create the archive and the version used to read it?

Cheers,

Loris

Paul Edmon <ped...@cfa.harvard.edu> writes:

Files generated by slurmdbd's archiving are read back into the live
database by sacctmgr.  See:

archive load
  Load previously archived data into the database. The archive file will
  not be loaded if the records already exist in the database; therefore,
  trying to load an archive file more than once will result in an error.
  When this data is again archived and purged from the database, if the
  old archive file is still in the directory ArchiveDir, a new archive
  file will be created (see ArchiveDir in the slurmdbd.conf man page), so
  the old file will not be overwritten and these files will have
  duplicate records.

  File=
    File to load into the database. The specified file must exist on the
    slurmdbd host, which is not necessarily the machine running the
    command.
  Insert=
    SQL to insert directly into the database. This should be used very
    cautiously, since it writes your SQL directly into the database.
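For example, a load would look something like this (the path and file
name here are hypothetical; slurmdbd generates archive file names under
ArchiveDir based on the cluster, table, and time range):

  sacctmgr archive load File=/var/spool/slurmdbd.archive/mycluster_job_table_archive_2019-01-01T00:00:00_2019-12-31T23:59:59

Note that, per the File= description above, the file has to exist on the
slurmdbd host itself, not necessarily where you run the command.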

So you could set up a full mirror and then read the old archives into
that.  You just want to make sure that mirror has archiving and purging
turned off so it won't re-archive the data you restored; a sketch of the
relevant slurmdbd.conf settings follows.
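
A minimal sketch of the mirror's slurmdbd.conf, assuming a local MySQL
database (hostnames and paths are placeholders, not recommendations):

  # slurmdbd.conf on the archive mirror
  DbdHost=slurmdbd-mirror
  StorageType=accounting_storage/mysql
  StorageHost=localhost
  StorageLoc=slurm_acct_db
  # Leave archiving explicitly off and set no Purge*After options, so
  # data loaded with "sacctmgr archive load" is never archived or purged:
  ArchiveEvents=no
  ArchiveJobs=no
  ArchiveSteps=no
  ArchiveUsage=no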

-Paul Edmon-

On 12/10/2021 1:28 PM, Ransom, Geoffrey M. wrote:

  Hello

  Our slurmdbd database is getting rather large and affecting
performance, but we want to keep usage data around for a few years for
metrics, in order to figure out how our users work.  I read a suggestion
to have a backup DB which has all the usage data synced to it for
metrics, and a main slurmdbd setup for the cluster to use that cleans
out old data based on your users' working needs.

  Is there any documentation suggesting how to set up a second slurmdbd
server that will receive a copy of all the main slurmdbd entries without
purging, so we can start purging on the in-use slurmdbd service to keep
short-term performance snappy?  Presumably the upgrade process will be
complicated by this as well, since we will have to keep the archive
slurmdbd setup in sync with the cluster slurmdbd.

  Thanks.

  *EDIT before hitting send*   I was re-reading the slurmdbd.conf man
page and just saw the Archive* options; these sound like they would work
for implementing something like this (see the sketch below).
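
A hedged sketch of what enabling archiving and purging on the primary
slurmdbd might look like (the retention periods here are illustrative
assumptions, not recommendations):

  # slurmdbd.conf archive/purge settings (values are examples only)
  ArchiveDir=/var/spool/slurmdbd.archive
  ArchiveJobs=yes
  ArchiveSteps=yes
  ArchiveEvents=yes
  ArchiveUsage=yes
  PurgeJobAfter=24months
  PurgeStepAfter=24months
  PurgeEventAfter=12months
  PurgeUsageAfter=24months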

  Are archive files readable by sacct and sreport, or easily manually parseable?

  I am going to turn these on in my test cluster, but hearing about
other people's experiences with this would probably be helpful.

