The method I have used is: 1) edit ceph.conf, 2) push it to the nodes with
ceph-deploy config push, 3) restart the affected daemons (the monitors, in
my case).
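
For the option in question, the edit itself is a one-line stanza in
ceph.conf; a minimal sketch (placing it under [osd] is my assumption,
[global] would also work):

[osd]
osd crush update on start = false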

Example:
roger@desktop:~/ceph-cluster$ vi ceph.conf    # make ceph.conf change
roger@desktop:~/ceph-cluster$ ceph-deploy --overwrite-conf config push nuc{1..3}
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/roger/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.38): /usr/bin/ceph-deploy --overwrite-conf config push nuc1 nuc2 nuc3
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : True
[ceph_deploy.cli][INFO  ]  subcommand                    : push
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fbb393de3b0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['nuc1', 'nuc2', 'nuc3']
[ceph_deploy.cli][INFO  ]  func                          : <function config at 0x7fbb396299b0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.config][DEBUG ] Pushing config to nuc1
[nuc1][DEBUG ] connection detected need for sudo
[nuc1][DEBUG ] connected to host: nuc1
[nuc1][DEBUG ] detect platform information from remote host
[nuc1][DEBUG ] detect machine type
[nuc1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.config][DEBUG ] Pushing config to nuc2
[nuc2][DEBUG ] connection detected need for sudo
[nuc2][DEBUG ] connected to host: nuc2
[nuc2][DEBUG ] detect platform information from remote host
[nuc2][DEBUG ] detect machine type
[nuc2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.config][DEBUG ] Pushing config to nuc3
[nuc3][DEBUG ] connection detected need for sudo
[nuc3][DEBUG ] connected to host: nuc3
[nuc3][DEBUG ] detect platform information from remote host
[nuc3][DEBUG ] detect machine type
[nuc3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
roger@desktop:~/ceph-cluster$ ssh nuc1
roger@nuc1:~$ sudo systemctl restart ceph-mon.target
roger@nuc1:~$
...etc.
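
Note that osd_crush_update_on_start is read by the OSD at startup, so for
that particular option you would restart the OSD daemons (sudo systemctl
restart ceph-osd.target on each OSD host) rather than only the monitors.
A sketch of checking the live value afterwards via the admin socket
(osd.82 is taken from your example, and this assumes you run the command
on the host carrying that OSD):

roger@nuc1:~$ sudo ceph daemon osd.82 config show | grep osd_crush_update_on_start

It should report osd_crush_update_on_start as false once the daemon has
been restarted with the new ceph.conf in place.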


On Mon, Jul 24, 2017 at 11:34 AM moftah moftah <moft...@gmail.com> wrote:

> Hi
>
> I am having a hard time finding documentation on the correct way to
> update ceph.conf in a running cluster.
>
> The change I want to introduce is this:
> osd crush update on start = false
>
> I tried to do it through the tell utility like this:
> ceph tell osd.82 injectargs --no-osd-crush-update-on-start
>
> The answer was:
> osd_crush_update_on_start = 'false' (unchangeable)
>
> Now it seems I need to restart something to get this new config live.
> I find it strange that I have to restart all OSD processes in the system
> just to update the config.
>
> Is there a procedure for this,
> and can I just restart the mon process on the mon nodes?
>
> Thanks
