Hello.
In which cases will Slurm attempt to relaunch a task and assign it to a new
node when the current node fails? Sometimes we see Slurm try to start a task
on each node in its partition in turn, but we can't reproduce this behavior.
We tried manually breaking one node; however, Slurm just marks the task as
failed and doesn't relaunch it on any other node.
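For reference, our submissions look roughly like the sketch below (script and job names are placeholders, not our actual ones). As far as we understand, Slurm only requeues a batch job after a node failure when the job is marked requeueable (e.g. via `--requeue` on `sbatch`, or `JobRequeue=1` in slurm.conf), so we're unsure whether the behavior we saw depends on this flag:

```
#!/bin/bash
#SBATCH --job-name=example-task      # placeholder name
#SBATCH --nodes=1
#SBATCH --requeue                    # allow Slurm to requeue the job if its node fails

srun ./our_task                      # placeholder binary
```

Is `--requeue` (or the cluster-wide `JobRequeue` setting) what controls whether the task is retried on another node, or are there other cases where Slurm relaunches a task on its own?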
