:)
So my situation is that all the compute nodes in my cluster must be
removed, and new ones must be added.
But I think I will just destroy the cluster and start with a new one, and
replace the frontend with the old frontend machine once the new cluster
is set up.
That seems a bit easier.
Hi again,
so I discovered that it currently does not work to remove all compute
nodes with the resize command in elasticluster.
Since there are no more compute nodes (or slurm_workers) left when I have
issued the resize command removing all the slurm workers, I get this:
TASK [nfs-server :
Hello Maiken,
> I hacked my way through it changing the
> .elasticluster/storage/$clustername.yml file with the correct image_id for
> the nodes, but I am sure there is a better way!
Unfortunately no :-(
The way to do it without hacking the `.elasticluster/storage/*.yml`
files would be to:
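(For anyone hitting the same problem: the hack Maiken describes boils down to changing the per-node image id in the stored cluster state. The layout below is a rough, hypothetical sketch; the exact structure of `.elasticluster/storage/<clustername>.yml` depends on the elasticluster version and cloud provider.)

```yaml
# Hypothetical excerpt of .elasticluster/storage/<clustername>.yml
# -- the surrounding structure is illustrative, not verbatim;
#    only the image_id field is the one actually being edited.
nodes:
  compute:
    - name: compute001
      image_id: <new-image-id>   # replace with the id of the new image
    - name: compute002
      image_id: <new-image-id>
```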
Hello Maiken,
looks like a syntax error in the playbooks, but I won't be able to
check until end of the week.
Can you please try with Ansible 2.7 and see if the error goes away?
Ciao,
R
--
You received this message because you are subscribed to the Google Groups
"elasticluster" group.
Hi,
I am expanding my cluster, and need to use a new image_id.
However, how do I get elasticluster to read the new values in my
configuration?
I have added the new image_id in .elasticluster/config
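For context, the image id lives in the cluster section of `~/.elasticluster/config`; a minimal sketch (the section name and all values besides `image_id` are made up for illustration):

```ini
# ~/.elasticluster/config -- hypothetical excerpt
[cluster/mycluster]
cloud=openstack
login=ubuntu-login
setup=slurm-setup
image_id=<new-image-id>   ; the field being updated
flavor=m1.medium
frontend_nodes=1
compute_nodes=4
```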
I deactivate the virtualenv and activate it again (not sure if that is
necessary).
But
Hi again,
I am using Ansible 2.8 with elasticluster installed from the master
branch as of 20.05.2019,
elasticluster version 1.3.dev13
Trying
elasticluster resize -r 20:compute
2019-05-20 20:17:26 elasticluster-final.novalocal gc3.elasticluster[13605]
WARNING CryptographyDeprecationWarning:
I never got that far, but now at last I am ready to give it a try. I will
tell you how it went.
On Tuesday, May 7, 2019 at 10:53:22 PM UTC+2, Maiken Pedersen wrote:
>
> Hi,
> Very nice, thank you very much. I will give it a try tomorrow!
> Maiken
>
> > On 7 May 2019, at 21:50, Riccardo Murri
The issue was worked around by removing slurm-sjstat, slurm-sjobexit, and
slurm-contribs on the frontend and adding ignore_errors to the task.
A bit of a hack, but well, no need to spend more time on it :)
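Concretely, the workaround amounts to something like this in the playbook (the task name and module are a hypothetical sketch; only the package names and the `ignore_errors` flag come from the steps described above):

```yaml
# Hypothetical sketch of the patched task -- the real task lives in
# the elasticluster SLURM playbooks and may look different.
- name: Install SLURM contrib packages
  package:
    name:
      - slurm-sjstat
      - slurm-sjobexit
      - slurm-contribs
    state: present
  ignore_errors: yes   # let setup continue if these packages fail
```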
On Tuesday, May 7, 2019 at 9:15:31 PM UTC+2, Riccardo Murri wrote:
>
> Hello Maiken,
>
> > The task