Hi, I have upgraded Slurm to 15.08 but am getting the errors below.

cat slurmdbd.log
[2016-09-16T11:47:53.908] Accounting storage MYSQL plugin loaded
[2016-09-16T11:47:53.909] Not running as root. Can't drop supplementary
groups
[2016-09-16T11:47:53.912] slurmdbd version 15.08.12 started
[2016-09-16T11:48:04.889] DBD_CLUSTER_TRES: cluster not registered

[root@okdev1315 slurm]# cat slurmctld.log
[2016-09-16T11:47:53.835] Not running as root. Can't drop supplementary
groups
[2016-09-16T11:47:53.835] slurmctld version 15.08.12 started on cluster
cluster
[2016-09-16T11:47:53.839] error: Association database appears down, reading
from state file.
[2016-09-16T11:47:53.847] layouts: no layout to initialize
[2016-09-16T11:47:53.850] layouts: loading entities/relations information
[2016-09-16T11:47:53.850] Recovered state of 5 nodes
[2016-09-16T11:47:53.850] Recovered JobID=81205 State=0x1 NodeCnt=0 Assoc=16
[2016-09-16T11:47:53.850] Recovered JobID=81212 State=0x1 NodeCnt=0 Assoc=16
[2016-09-16T11:47:53.850] Recovered information about 2 jobs
[2016-09-16T11:47:53.850] cons_res: select_p_node_init
[2016-09-16T11:47:53.851] cons_res: preparing for 6 partitions
[2016-09-16T11:47:53.851] Recovered state of 0 reservations
[2016-09-16T11:47:53.851] read_slurm_conf: backup_controller not specified.
[2016-09-16T11:47:53.851] cons_res: select_p_reconfigure
[2016-09-16T11:47:53.851] cons_res: select_p_node_init
[2016-09-16T11:47:53.851] cons_res: preparing for 6 partitions
[2016-09-16T11:47:53.851] Running as primary controller
[2016-09-16T11:47:53.851] Registering slurmctld at port 6827 with slurmdbd.
[2016-09-16T11:47:53.852] Recovered information about 0 sicp jobs
[2016-09-16T11:47:53.936] error: Setting node okdev1314 state to DRAIN
[2016-09-16T11:47:53.936] drain_nodes: node okdev1314 state set to DRAIN
[2016-09-16T11:47:53.936] error: _slurm_rpc_node_registration
node=okdev1314: Invalid argument
[2016-09-16T11:47:54.110] error: Setting node okdev1324 state to DRAIN
[2016-09-16T11:47:54.110] drain_nodes: node okdev1324 state set to DRAIN
[2016-09-16T11:47:54.110] error: _slurm_rpc_node_registration
node=okdev1324: Invalid argument
[2016-09-16T11:47:54.226] error: Setting node okdev1447 state to DRAIN
[2016-09-16T11:47:54.226] drain_nodes: node okdev1447 state set to DRAIN
[2016-09-16T11:47:54.226] error: _slurm_rpc_node_registration
node=okdev1447: Invalid argument
[2016-09-16T11:47:54.373] error: Setting node okdev1367 state to DRAIN
[2016-09-16T11:47:54.373] drain_nodes: node okdev1367 state set to DRAIN
[2016-09-16T11:47:54.373] error: _slurm_rpc_node_registration
node=okdev1367: Invalid argument
[2016-09-16T11:47:54.564] error: Setting node okdev1368 state to DRAIN
[2016-09-16T11:47:54.564] drain_nodes: node okdev1368 state set to DRAIN
[2016-09-16T11:47:54.564] error: _slurm_rpc_node_registration
node=okdev1368: Invalid argument
[2016-09-16T11:48:40.495] _slurm_rpc_kill_job2: REQUEST_KILL_JOB job 81212
uid 11510
[2016-09-16T11:48:40.495] _slurm_rpc_kill_job2: REQUEST_KILL_JOB job 81205
uid 11510

Any idea what is missing?
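Could it be that the cluster still has to be registered in the accounting
database after the upgrade? I was thinking of trying something like the
following (the cluster name "cluster" is taken from the slurmctld log above;
I have not verified it against ClusterName in slurm.conf yet):

```shell
# check whether slurmdbd knows about the cluster at all
sacctmgr show cluster

# register it if it is missing ("cluster" assumed from the
# "started on cluster cluster" line in slurmctld.log)
sacctmgr -i add cluster cluster

# then restart the controller so it re-registers with slurmdbd
/etc/init.d/slurm restart
```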


Thanks & Regards,
Balaji Deivam
Staff Analyst - Business Data Center
Seagate Technology - 389 Disc Drive, Longmont, CO 80503 | 720-684-2363

On Sat, Aug 20, 2016 at 12:10 AM, Barbara Krasovec <barba...@arnes.si>
wrote:

> Check which are installed:
> rpm -qa | grep slurm
>
> Probably slurm, slurm-munge and slurm-plugins.
>
> Cheers,
> Barbara
>
>
> On 19/08/16 22:54, Balaji Deivam wrote:
>
> Sorry, it's for the compute nodes (not for the control nodes).
>
>
> On Fri, Aug 19, 2016 at 2:52 PM, Balaji Deivam <balaji.dei...@seagate.com>
> wrote:
>
>> Thanks. Which components have to be upgraded on the Slurm control
>> nodes?
>>
>>
>> On Thu, Aug 18, 2016 at 2:45 PM, Barbara Krasovec <barba...@arnes.si>
>> wrote:
>>
>>> Well, if you're upgrading already-installed packages, you can do:
>>>
>>> yum update slurm-sql slurm-munge slurm-slurmdbd
>>>
>>> or
>>>
>>> rpm -Uvh ....
>>> (the -U switch upgrades already-installed packages)
>>>
>>> Cheers,
>>> Barbara
>>>
>>>
>>> On 18/08/16 19:50, Balaji Deivam wrote:
>>>
>>> Thanks for your response.
>>>
>>> I have built the RPMs and got the files below. Then I installed only
>>> the 3 RPMs you mentioned. Is this right?
>>>
>>> -rw-r----- 1 root root 25680316 Aug 18 12:44
>>> slurm-15.08.12-1.el6.x86_64.rpm
>>> -rw-r----- 1 root root   451160 Aug 18 12:44
>>> slurm-perlapi-15.08.12-1.el6.x86_64.rpm
>>> -rw-r----- 1 root root   117380 Aug 18 12:44
>>> slurm-devel-15.08.12-1.el6.x86_64.rpm
>>> -rw-r----- 1 root root    16024 Aug 18 12:44
>>> slurm-munge-15.08.12-1.el6.x86_64.rpm
>>> -rw-r----- 1 root root  1061536 Aug 18 12:44
>>> slurm-slurmdbd-15.08.12-1.el6.x86_64.rpm
>>> -rw-r----- 1 root root   281868 Aug 18 12:44
>>> slurm-sql-15.08.12-1.el6.x86_64.rpm
>>> -rw-r----- 1 root root  1140588 Aug 18 12:44
>>> slurm-plugins-15.08.12-1.el6.x86_64.rpm
>>> -rw-r----- 1 root root    35760 Aug 18 12:44
>>> slurm-torque-15.08.12-1.el6.x86_64.rpm
>>> -rw-r----- 1 root root     7100 Aug 18 12:44
>>> slurm-sjobexit-15.08.12-1.el6.x86_64.rpm
>>> -rw-r----- 1 root root     6260 Aug 18 12:44
>>> slurm-slurmdb-direct-15.08.12-1.el6.x86_64.rpm
>>> -rw-r----- 1 root root    10420 Aug 18 12:44
>>> slurm-sjstat-15.08.12-1.el6.x86_64.rpm
>>> -rw-r----- 1 root root    35696 Aug 18 12:44
>>> slurm-pam_slurm-15.08.12-1.el6.x86_64.rpm
>>> [root@cloudlg017223 x86_64]# pwd
>>> /root/rpmbuild/RPMS/x86_64
>>> [root@cloudlg017223 x86_64]#
>>>
>>>
>>> [root@cloudlg017223 x86_64]# rpm -ivh
>>> slurm-slurmdbd-15.08.12-1.el6.x86_64.rpm
>>> slurm-munge-15.08.12-1.el6.x86_64.rpm slurm-sql-15.08.12-1.el6.x86_64.rpm
>>> Preparing...                ###########################################
>>> [100%]
>>>         file /apps/slurm/lib64/slurm/accounting_storage_mysql.so from
>>> install of slurm-sql-15.08.12-1.el6.x86_64 conflicts with file from
>>> package slurm-sql-14.11.3-1.el6.x86_64
>>>         file /apps/slurm/lib64/slurm/jobcomp_mysql.so from install of
>>> slurm-sql-15.08.12-1.el6.x86_64 conflicts with file from package
>>> slurm-sql-14.11.3-1.el6.x86_64
>>>         file /apps/slurm/sbin/slurmdbd from install of
>>> slurm-slurmdbd-15.08.12-1.el6.x86_64 conflicts with file from package
>>> slurm-slurmdbd-14.11.3-1.el6.x86_64
>>>         file /apps/slurm/share/man/man5/slurmdbd.conf.5 from install of
>>> slurm-slurmdbd-15.08.12-1.el6.x86_64 conflicts with file from package
>>> slurm-slurmdbd-14.11.3-1.el6.x86_64
>>>         file /apps/slurm/share/man/man8/slurmdbd.8 from install of
>>> slurm-slurmdbd-15.08.12-1.el6.x86_64 conflicts with file from package
>>> slurm-slurmdbd-14.11.3-1.el6.x86_64
>>>         file /apps/slurm/lib64/slurm/auth_munge.so from install of
>>> slurm-munge-15.08.12-1.el6.x86_64 conflicts with file from package
>>> slurm-munge-14.11.3-1.el6.x86_64
>>>         file /apps/slurm/lib64/slurm/crypto_munge.so from install of
>>> slurm-munge-15.08.12-1.el6.x86_64 conflicts with file from package
>>> slurm-munge-14.11.3-1.el6.x86_64
>>> [root@cloudlg017223 x86_64]#
>>>
>>>
>>>
>>>
>>>
>>> On Thu, Aug 18, 2016 at 4:57 AM, Barbara Krasovec <barba...@arnes.si>
>>> wrote:
>>>
>>>> Hello!
>>>>
>>>>
>>>> On 17/08/16 23:59, Balaji Deivam wrote:
>>>>
>>>>
>>>> Hi,
>>>>
>>>>
>>>>> Can someone give me detailed steps for "Upgrade the slurmdbd
>>>>> daemon"?
>>>>
>>>>
>>>> I have downloaded the Slurm source tar file and am looking for how to
>>>> upgrade only slurmdbd from that tar file.
>>>>
>>>>
>>>>
>>>>
>>>> Did you mention the operating system you are using? In my case I used
>>>> RPMs, so I built RPMs for the new version and then upgraded the
>>>> following packages: slurm-slurmdbd, slurm-munge, slurm-sql
>>>>
>>>> Instructions on how to build/install SLURM:
>>>> http://slurm.schedmd.com/quickstart_admin.html
>>>>
>>>> Cheers,
>>>>
>>>> Barbara
>>>>
>>>
>>>
>>>
>>
>
>
