Hi all;
I am trying to run Slurm with multiple slurmd daemons on a single node. I
built Slurm with the "--enable-multiple-slurmd" configure option, and I modified
slurm.conf:
# COMPUTE NODES
NodeName=node1 NodeHostname=claudioslurm Port=17001 CPUs=2 State=UNKNOWN
NodeName=node2 NodeHostname=claudioslurm Port=17
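For reference, a minimal sketch of what a two-daemon configuration of this kind usually needs; the second port, the path values, and the launch commands below are illustrative assumptions, not taken from the original message:

```
# Sketch only: two virtual nodes on one host, each slurmd on its own port
# (node names, ports, and paths here are hypothetical).
NodeName=node1 NodeHostname=claudioslurm Port=17001 CPUs=2 State=UNKNOWN
NodeName=node2 NodeHostname=claudioslurm Port=17002 CPUs=2 State=UNKNOWN
# With --enable-multiple-slurmd, each daemon also needs distinct paths,
# typically parameterized with %n (the node name):
SlurmdLogFile=/var/log/slurm/slurmd.%n.log
SlurmdSpoolDir=/var/spool/slurm/slurmd.%n
# Each daemon is then started with its node name:
#   slurmd -N node1
#   slurmd -N node2
```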
We use something that was originally home-grown in house but is now open
source (https://github.com/hpcsi/losf) after having moved off of Rocks
long ago. It's based on Cobbler but has recently been updated to support
both CentOS and SUSE. Either way, management of RPMs is at its core. We
have had m
We use SaltStack and Cobbler with CentOS 7. We tested Katello, but it is
unusable with Salt.
I can install a node in less than 20 minutes, and I manage the
configuration, partitions, scheduling, and accounting with Salt.
On 03/15/2016 01:40 PM, Bjørn-Helge Mevik wrote:
I apologize for the slightly off-topic subject
Not specifically a cluster provisioning system, but we use Foreman + Puppet
to provision our systems, which are stateful. The deploy time is kind of
lengthy, ~45 minutes per node, but the results are consistent and changes
can be made to systems after deployment via Puppet. Puppet operates in
mas
I am currently setting up a test cluster and shall be looking at
- Warewulf
If you like Warewulf, you could look at OpenHPC, which uses Warewulf for the
provisioning.
The slurm version on my OpenHPC server is 15.08.6, and this came from the
OpenHPC repositories.
##
Hi Bjørn-Helge,
We also have a good experience with xCAT and did lots of cluster configuration
across Europe. Please let us know if you need help or assistance. We also offer an
intensive xCAT training alongside other HPC services.
Regards,
Hossein
transtec ag
___
+1 for xCAT as well. We've enjoyed it for the most part; although we have found
a few bugs, the xCAT team is good to work with, just like the Slurm team. I
will note, like Daniel, that we're looking for some better integration with
configuration management software stacks, as well as the number of sup
+1 for xCAT. We use xCAT to manage 3k+ nodes, and while it can be a little
complex, it works very well. I've used xCAT on 3 different systems now and I
quite like it.
> On Mar 15, 2016, at 8:39 AM, Bjørn-Helge Mevik wrote:
>
>
> I apologize for the slightly off-topic subject
Hello All,
I've now narrowed it down to scontrol update commands. Test 17.18
performs an scontrol update of the StartTime, which causes the controller to
crash. I went back and tested with Slurm 14.11.9, and this does not happen.
Kelly
On 03/14/16, keltroy...@verizon.net wrote: Hello All, I am cur
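For anyone trying to reproduce this, the command shape involved is the following; the job id and time value here are hypothetical examples, not the ones from the failing test:

```
# Hypothetical example of an scontrol update of a job's StartTime:
scontrol update JobId=1234 StartTime=2016-03-16T12:00:00
```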
Another vote for xCAT here - been using it for ~3 years now, on
installations ranging from 8 to 1+k nodes.
Once you get to know xCAT it's quite easy to manage, although
familiarity with Perl will help in any troubleshooting or
customization (
> "BH" == Bjørn-Helge Mevik writes:
BH> I apologize for the slightly off-topic subject, but I could not
BH> think of a better forum to ask. If you know of a more proper
BH> place to ask this, I'd be happy to know about it.
BH> We are currently in the design phase for a new c
Hi Bjørn-Helge,
Bjørn-Helge Mevik
writes:
> I apologize for the slightly off-topic subject, but I could not think of
> a better forum to ask. If you know of a more proper place to ask this,
> I'd be happy to know about it.
>
> We are currently in the design phase for a new cluster that is going
Bjorn
You should definitely be looking at Bright Cluster Manager.
I set up a Bright cluster last week with CentOS 7.2 and Slurm.
Bright works right out of the box with Slurm, and it is set up automatically as
you provision the nodes.
It also has the power-saving scripts etc. all set up.
Please ping
> On Mar 15, 2016, at 08:44, Chris Samuel wrote:
>
>> On Tue, 15 Mar 2016 05:40:29 AM Bjørn-Helge Mevik wrote:
>>
>> I apologize for the slightly off-topic subject, but I could not think of
>> a better forum to ask. If you know of a more proper place to ask this,
>> I'd be happy to know about it.
On Tue, 15 Mar 2016 05:40:29 AM Bjørn-Helge Mevik wrote:
> I apologize for the slightly off-topic subject, but I could not think of
> a better forum to ask. If you know of a more proper place to ask this,
> I'd be happy to know about it.
http://beowulf.org/
There's actually a very recent threa
I apologize for the slightly off-topic subject, but I could not think of
a better forum to ask. If you know of a more proper place to ask this,
I'd be happy to know about it.
We are currently in the design phase for a new cluster that is going to
be set up next year. We have so far used Rocks (o