Thanks Yama and William.

Yes, William, it makes sense that this is really a Rocks question. I'll post
there if I have further trouble.

And yes, that's right, Rocks does have installation scripts that I should be
able to modify; I haven't looked at that in quite some time. Looking at it
again now, I remember I was having trouble back then getting post-installation
steps to run, so I'll have to ask for help on the Rocks list.
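
For reference, if I'm remembering the Rocks layout right (unverified on my end,
which is why I'll check with the Rocks list), the hook is the extend-compute.xml
node file under site-profiles, followed by rebuilding the distro. Something like:

/export/rocks/install/site-profiles/6.2/nodes/extend-compute.xml:

<?xml version="1.0" standalone="no"?>
<kickstart>
  <post>
  <!-- post-install steps to run on the node at the end of kickstart;
       the commands here are just the first couple from my list below -->
  usermod -u 399 sgeadmin
  groupmod -g 399 sgeadmin
  </post>
</kickstart>

Then, on the front end:

cd /export/rocks/install
rocks create distro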

-M

On Wed, Nov 18, 2015 at 3:43 PM, Yama Nawabi (Elev) <[email protected]> wrote:

> Hi Michael,
>
>
>
> (This information is RHEL-specific, and I personally use xCAT, so some of
> this may need to be adapted for ROCKS.)
>
>
>
> Does ROCKS provide a kickstart template that you may be able to edit? In
> my xCAT configuration's kickstart template, I add the following after the
> selinux and reboot statements:
>
>
>
> repo --name=puppetlabs-products --baseurl=http://yum.puppetlabs.com/el/6/products/x86_64/
>
> repo --name=puppetlabs-deps --baseurl=http://yum.puppetlabs.com/el/6/dependencies/x86_64/
>
> services --enabled=puppet
>
>
>
> In the packages section, specify the puppet package and all of the SGE-related
> packages that need to be installed. You may need to drop the SGE packages
> into your x86_64/Packages/ directory and regenerate the repo using
> createrepo. Find out how ROCKS can be used to drop a custom puppet.conf
> into /etc/puppet and to stand up a host that will serve as the puppetmaster.
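>
> As a rough illustration (the package names, repo path, and puppetmaster
> hostname below are placeholders, not anything ROCKS provides), the pieces
> would look something like:
>
> %packages
> puppet
> gridengine-execd        # placeholder; use whatever your SoGE RPMs are actually called
> %end
>
> # on the provisioning host, after copying the RPMs into x86_64/Packages/:
> createrepo /path/to/x86_64/
>
> # minimal /etc/puppet/puppet.conf pushed to the node:
> [agent]
> server = puppetmaster.example.com   # placeholder hostname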
>
>
>
> Upon first boot, you will have an SGE installation in /opt/sge and a puppet
> configuration.
>
>
>
> Use puppet to do the following (I recommend using run-stages; a rough
> manifest sketch follows the list):
>
> 1. Mount whatever NFS mounts are necessary for your shared cell.
> 2. Ensure the execd rc file is placed in /etc/init.d.
> 3. Chkconfig the service and start it so that sge_execd is running.
> 4. Open whatever ports are necessary (although I just used firewall
> --disabled in the KS template).
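>
> A minimal run-stages sketch along these lines (the class names, stage name,
> and NFS source below are illustrative, not taken from an actual ROCKS/SoGE
> setup):
>
> # site.pp (puppet 3.x syntax)
> stage { 'mounts': before => Stage['main'] }
>
> class sge_mounts {
>   mount { '/opt/sge':
>     ensure  => mounted,
>     device  => 'frontend:/opt/sge',   # replace with your NFS server
>     fstype  => 'nfs',
>     options => 'defaults,noatime',
>   }
> }
>
> class sge_execd {
>   file { '/etc/init.d/sgeexecd':
>     source => '/opt/sge/default/common/sgeexecd',   # execd rc script shipped with SGE
>     mode   => '0755',
>   }
>   service { 'sgeexecd':
>     ensure  => running,
>     enable  => true,   # chkconfig on
>     require => File['/etc/init.d/sgeexecd'],
>   }
> }
>
> class { 'sge_mounts': stage => 'mounts' }
> include sge_execd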
>
>
>
> Ensure that you have the hosts already configured on your qmaster (see the
> qconf example below). Hope this helps, let me know if you need more
> information.
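>
> For example (node001 is just a placeholder hostname), on the qmaster:
>
> qconf -ah node001                                    # add node001 as an administrative host
> qconf -aattr hostgroup hostlist node001 @allhosts    # add it to the @allhosts host group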
>
>
>
> -Yama
>
>
>
> Yama
>
> Systems Engineer
>
> Institute for Neuroimaging and Informatics, Keck School of Medicine of USC
>
>
>
>
>
> From: [email protected] [mailto:[email protected]] On Behalf Of Michael Stauffer
> Sent: Tuesday, November 17, 2015 1:56 PM
> To: Gridengine Users Group <[email protected]>
> Subject: [gridengine users] how best to initialize a rocks node for SGE on boot?
>
>
>
> SoGE 8.1.8
>
> Hi,
>
> What's the proper way to automatically initialize SGE on a rocks-managed
> compute node after it's reinstalled? I recently upgraded my cluster from
> rocks 6.1 to 6.2, and in the process moved from OGS to SoGE 8.1.8. With
> that came a bunch of manual configuration steps for SoGE that rocks used to
> handle with its bundled distro of (old) OGS.
>
> I have a number of steps to perform after a node comes back up from a
> reinstall. How should I set this up so that it happens automatically? In an
> /etc/init.d script? How do I get it into the distro in the right place?
> Below are the steps I want to perform. Thanks for any advice.
>
> usermod -u 399 sgeadmin
>
> groupmod -g 399 sgeadmin
>
> echo "#manually added" >> /etc/fstab
>
> echo "<front-end>:/opt/sge                    /opt/sge    nfs
> defaults,noatime      0 0" >> /etc/fstab
>
> mount /opt/sge
>
>  . /etc/profile.d/cfn-sge-env.sh
>
> cp $SGE_ROOT/default/common/sgeexecd /etc/init.d/sgeexecd.<cluster-name>
>
> /usr/lib/lsb/install_initd /etc/init.d/sgeexecd.<cluster-name>
>
> service sgeexecd.<cluster-name> start
>
>
>
> -Michael
>
>
>