Hello Maiken,

> I hacked my way through it changing the 
> .elasticluster/storage/$clustername.yml file with the correct image_id for 
> the nodes,  but I am sure there is a better way!

Unfortunately no :-(

The way to do it without hacking the `.elasticluster/storage/*.yml`
files would be to (a consolidated config sketch follows the steps below):

* add a second class of compute nodes (call it `compute2`) to the
`[setup/*]` section of the cluster configuration, identical to the
existing `compute` nodes:

    [setup/slurm]
    frontend_groups=slurm_master
    compute_groups=slurm_worker
    compute2_groups=slurm_worker

* add a `[cluster/*/compute2]` section with the new image ID:

    [cluster/slurm]
    # ...everything stays the same...

    [cluster/slurm/compute2]
    image_id=new-image-ID

* now add nodes of class `compute2` to the cluster:

    elasticluster resize -a compute2:1 slurm
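
For reference, here is a rough sketch of how the relevant configuration
sections fit together (the section names follow the `slurm` example
above; `new-image-ID` is a placeholder, and every option not shown stays
exactly as in your current configuration):

    [setup/slurm]
    frontend_groups=slurm_master
    compute_groups=slurm_worker
    compute2_groups=slurm_worker

    [cluster/slurm]
    # ...everything stays the same...

    [cluster/slurm/compute2]
    # only the options you want to override go here
    image_id=new-image-ID

With that in place, the `resize` command above starts one node of the
new class without touching the existing `compute` nodes.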

I should eventually add an `elasticluster freshen` command that merges
changes from the configuration file into an existing cluster state file,
but that is still a long way from where we stand now.

Ciao,
R
