Mark,

Thank you for steering me in the right direction! After fixing the bad key
in /var/lib/ceph/bootstrap-mgr/ceph.keyring on the hosts that will run the
mgr daemons, I updated the caps and was then able to deploy mgr successfully.
Curiously, `ceph -s` shows only one of the three deployed mgrs; nuc2 and nuc3
do not appear even as standbys.
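
For reference, here is the per-host fix in condensed form (a sketch only;
nucN stands in for each mgr host, and it follows the scp/cp route I used on
nuc3 rather than the sed edit I used on nuc2):

# from the admin node: push the regenerated bootstrap-mgr key into place
$ scp ceph.bootstrap-mgr.keyring nucN:~
$ ssh nucN 'sudo cp ceph.bootstrap-mgr.keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring'
# widen the caps on mgr.nucN, then deploy
$ ceph auth caps mgr.nucN mon 'allow profile mgr' osd 'allow *' mds 'allow *' mgr 'allow r'
$ ceph-deploy -v mgr create nucN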

The full history of deploying mgr to the 3 hosts follows.

# First host
roger@desktop:~/ceph-cluster$ ceph auth get mgr.nuc1
exported keyring for mgr.nuc1
[mgr.nuc1]
key = AQAt549YwRXPKBAA3eLp6MAORuVs12rKo4onog==
caps mon = "allow profile mgr"
roger@desktop:~/ceph-cluster$ ceph auth caps mgr.nuc1 mon 'allow profile
mgr' osd 'allow *' mds 'allow *' mgr 'allow r'
updated caps for mgr.nuc1
roger@desktop:~/ceph-cluster$ ceph auth get mgr.nuc1
exported keyring for mgr.nuc1
[mgr.nuc1]
key = AQAt549YwRXPKBAA3eLp6MAORuVs12rKo4onog==
caps mds = "allow *"
caps mgr = "allow r"
caps mon = "allow profile mgr"
caps osd = "allow *"
roger@desktop:~/ceph-cluster$ ceph-deploy -v mgr create nuc1
[ceph_deploy.conf][DEBUG ] found configuration file at:
/home/roger/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.38): /usr/bin/ceph-deploy -v mgr
create nuc1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : True
[ceph_deploy.cli][INFO  ]  mgr                           : [('nuc1',
'nuc1')]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       :
<ceph_deploy.conf.cephdeploy.Conf instance at 0x7fafe77b0c20>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mgr at
0x7fafe7e27668>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts nuc1:nuc1
[nuc1][DEBUG ] connection detected need for sudo
[nuc1][DEBUG ] connected to host: nuc1
[nuc1][DEBUG ] detect platform information from remote host
[nuc1][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: Ubuntu 16.04 xenial
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to nuc1
[nuc1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[nuc1][DEBUG ] create path if it doesn't exist
[nuc1][INFO  ] Running command: sudo ceph --cluster ceph --name
client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring
auth get-or-create mgr.nuc1 mon allow profile mgr osd allow * mds allow *
-o /var/lib/ceph/mgr/ceph-nuc1/keyring
[nuc1][INFO  ] Running command: sudo systemctl enable ceph-mgr@nuc1
[nuc1][WARNIN] Created symlink from
/etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@nuc1.service to
/lib/systemd/system/ceph-mgr@.service.
[nuc1][INFO  ] Running command: sudo systemctl start ceph-mgr@nuc1
[nuc1][INFO  ] Running command: sudo systemctl enable ceph.target

# Second host
roger@desktop:~/ceph-cluster$ ssh nuc2
roger@nuc2:~$ sudo cat /var/lib/ceph/bootstrap-mgr/ceph.keyring
[client.bootstrap-mgr]
key = AAAAAAAAAAAAAAAA
caps mon = "allow profile bootstrap-mgr"
roger@nuc2:~$ sudo sed -i
's/AAAAAAAAAAAAAAAA/AQBDt3RZPXdpNBAAekynuNJpVPaN1B4YTeFu4w==/'
/var/lib/ceph/bootstrap-mgr/ceph.keyring
roger@nuc2:~$ sudo cat /var/lib/ceph/bootstrap-mgr/ceph.keyring
[client.bootstrap-mgr]
key = AQBDt3RZPXdpNBAAekynuNJpVPaN1B4YTeFu4w==
caps mon = "allow profile bootstrap-mgr"
roger@nuc2:~$ logout
Connection to nuc2 closed.
roger@desktop:~/ceph-cluster$ ceph auth get mgr.nuc2
exported keyring for mgr.nuc2
[mgr.nuc2]
key = AQA1549Yr5GFCxAAMKg8ynVwNfWDd4JoRgFbUg==
caps mon = "allow profile mgr"
roger@desktop:~/ceph-cluster$ ceph auth caps mgr.nuc2 mon 'allow profile
mgr' osd 'allow *' mds 'allow *' mgr 'allow r'
updated caps for mgr.nuc2
roger@desktop:~/ceph-cluster$ ceph auth get mgr.nuc2
exported keyring for mgr.nuc2
[mgr.nuc2]
key = AQA1549Yr5GFCxAAMKg8ynVwNfWDd4JoRgFbUg==
caps mds = "allow *"
caps mgr = "allow r"
caps mon = "allow profile mgr"
caps osd = "allow *"
roger@desktop:~/ceph-cluster$ ceph-deploy -v mgr create nuc2
[ceph_deploy.conf][DEBUG ] found configuration file at:
/home/roger/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.38): /usr/bin/ceph-deploy -v mgr
create nuc2
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : True
[ceph_deploy.cli][INFO  ]  mgr                           : [('nuc2',
'nuc2')]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       :
<ceph_deploy.conf.cephdeploy.Conf instance at 0x7f966eb7cc20>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mgr at
0x7f966f1f3668>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts nuc2:nuc2
[nuc2][DEBUG ] connection detected need for sudo
[nuc2][DEBUG ] connected to host: nuc2
[nuc2][DEBUG ] detect platform information from remote host
[nuc2][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: Ubuntu 16.04 xenial
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to nuc2
[nuc2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[nuc2][DEBUG ] create path if it doesn't exist
[nuc2][INFO  ] Running command: sudo ceph --cluster ceph --name
client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring
auth get-or-create mgr.nuc2 mon allow profile mgr osd allow * mds allow *
-o /var/lib/ceph/mgr/ceph-nuc2/keyring
[nuc2][INFO  ] Running command: sudo systemctl enable ceph-mgr@nuc2
[nuc2][WARNIN] Created symlink from
/etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@nuc2.service to
/lib/systemd/system/ceph-mgr@.service.
[nuc2][INFO  ] Running command: sudo systemctl start ceph-mgr@nuc2
[nuc2][INFO  ] Running command: sudo systemctl enable ceph.target

# Third host
roger@desktop:~/ceph-cluster$ scp ceph.bootstrap-mgr.keyring nuc3:~
ceph.bootstrap-mgr.keyring    100%  113     0.1KB/s   00:00
roger@desktop:~/ceph-cluster$ ssh nuc3
roger@nuc3:~$ sudo cat /var/lib/ceph/bootstrap-mgr/ceph.keyring
[client.bootstrap-mgr]
key = AAAAAAAAAAAAAAAA
caps mon = "allow profile bootstrap-mgr"
roger@nuc3:~$ cat ceph.bootstrap-mgr.keyring
[client.bootstrap-mgr]
key = AQBDt3RZPXdpNBAAekynuNJpVPaN1B4YTeFu4w==
caps mon = "allow profile bootstrap-mgr"
roger@nuc3:~$ sudo cp ceph.bootstrap-mgr.keyring
/var/lib/ceph/bootstrap-mgr/ceph.keyring
roger@nuc3:~$ logout
Connection to nuc3 closed.
roger@desktop:~/ceph-cluster$ ceph auth get mgr.nuc3
exported keyring for mgr.nuc3
[mgr.nuc3]
key = AQA8549Y04E1GBAA7Hk7vJVOE6vFDZyaecFslg==
caps mon = "allow profile mgr"
roger@desktop:~/ceph-cluster$ ceph auth caps mgr.nuc3 mon 'allow profile
mgr' osd 'allow *' mds 'allow *' mgr 'allow r'
updated caps for mgr.nuc3
roger@desktop:~/ceph-cluster$ ceph auth get mgr.nuc3
exported keyring for mgr.nuc3
[mgr.nuc3]
key = AQA8549Y04E1GBAA7Hk7vJVOE6vFDZyaecFslg==
caps mds = "allow *"
caps mgr = "allow r"
caps mon = "allow profile mgr"
caps osd = "allow *"
roger@desktop:~/ceph-cluster$ ceph-deploy -v mgr create nuc3
[ceph_deploy.conf][DEBUG ] found configuration file at:
/home/roger/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.38): /usr/bin/ceph-deploy -v mgr
create nuc3
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : True
[ceph_deploy.cli][INFO  ]  mgr                           : [('nuc3',
'nuc3')]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       :
<ceph_deploy.conf.cephdeploy.Conf instance at 0x7f2dbf41fc20>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mgr at
0x7f2dbfa96668>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts nuc3:nuc3
[nuc3][DEBUG ] connection detected need for sudo
[nuc3][DEBUG ] connected to host: nuc3
[nuc3][DEBUG ] detect platform information from remote host
[nuc3][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: Ubuntu 16.04 xenial
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to nuc3
[nuc3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[nuc3][DEBUG ] create path if it doesn't exist
[nuc3][INFO  ] Running command: sudo ceph --cluster ceph --name
client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring
auth get-or-create mgr.nuc3 mon allow profile mgr osd allow * mds allow *
-o /var/lib/ceph/mgr/ceph-nuc3/keyring
[nuc3][INFO  ] Running command: sudo systemctl enable ceph-mgr@nuc3
[nuc3][WARNIN] Created symlink from
/etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@nuc3.service to
/lib/systemd/system/ceph-mgr@.service.
[nuc3][INFO  ] Running command: sudo systemctl start ceph-mgr@nuc3
[nuc3][INFO  ] Running command: sudo systemctl enable ceph.target

# Status
roger@desktop:~/ceph-cluster$ ceph -s
...
  services:
    mon: 3 daemons, quorum nuc1,nuc2,nuc3
    mgr: nuc1(active)
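
I would have expected nuc2 and nuc3 to show up as standbys. In case it helps,
this is roughly what I plan to check next (a sketch; it assumes the systemd
units and keyring paths that ceph-deploy set up above):

$ ssh nuc2 'sudo systemctl status ceph-mgr@nuc2'            # is the daemon actually running?
$ ssh nuc2 'sudo journalctl -u ceph-mgr@nuc2 -n 50'         # recent log lines, e.g. auth errors
$ ssh nuc2 'sudo cat /var/lib/ceph/mgr/ceph-nuc2/keyring'   # key ceph-deploy wrote locally
$ ceph auth get mgr.nuc2                                    # compare with what the mons have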


On Sun, Jul 23, 2017 at 6:38 PM Mark Kirkwood <mark.kirkw...@catalyst.net.nz>
wrote:

> Ahhh - probably my fault that, sorry.
>
> Where I have:
>
> $ sudo ceph auth get-or-create client.bootstrap-mgr mon 'allow profile
> bootstrap-mgr'
>
> I should have:
>
> $ sudo ceph auth get-or-create client.bootstrap-mgr mon 'allow profile
> bootstrap-mgr' > /var/lib/ceph/bootstrap-mgr/ceph.keyring
>
> or something similar - i.e better fix up that file to match the new key!
>
> Cheers
>
> Mark
>
>
> On 24/07/17 12:26, Roger Brown wrote:
> > Looks like that also had the same bad key...
> >
> > roger@nuc1:~$ sudo cat /var/lib/ceph/bootstrap-mgr/ceph.keyring
> > [client.bootstrap-mgr]
> > key = AAAAAAAAAAAAAAAA
> > caps mon = "allow profile bootstrap-mgr"
> >
> >
> > On Sun, Jul 23, 2017 at 5:16 PM Mark Kirkwood
> > <mark.kirkw...@catalyst.net.nz> wrote:
> >
> >     Hmmm, not seen that here.
> >
> >      From the error message it does not seem to like
> >     /var/lib/ceph/bootstrap-mgr/ceph.keyring - what does the contents of
> >     that look like?
> >
> >     regards
> >
> >     Mark
> >     On 24/07/17 03:09, Roger Brown wrote:
> >     > Mark,
> >     >
> >     > Thanks for that information. I can't seem to deploy ceph-mgr
> >     either. I
> >     > also have the busted mgr bootstrap key. I attempted the
> >     suggested fix,
> >     > but my issue may be different somehow. Complete output follows.
> >     > -Roger
> >     >
> >     > roger@desktop:~$ ceph-deploy --version
> >     > 1.5.38
> >     > roger@desktop:~$ ceph mon versions
> >     > {
> >     >     "ceph version 12.1.1 (f3e663a190bf2ed12c7e3cda288b9a159572c800)
> >     > luminous (rc)": 3
> >     > }
> >     > roger@desktop:~/ceph-cluster$ sudo ceph auth get
> >     client.bootstrap-mgr
> >     > exported keyring for client.bootstrap-mgr
> >     > [client.bootstrap-mgr]
> >     > key = AAAAAAAAAAAAAAAA
> >     > caps mon = "allow profile bootstrap-mgr"
> >     > roger@desktop:~/ceph-cluster$ sudo ceph auth del
> >     client.bootstrap-mgr
> >     > updated
> >     > roger@desktop:~/ceph-cluster$ sudo ceph auth get
> >     client.bootstrap-mgr
> >     > Error ENOENT: failed to find client.bootstrap-mgr in keyring
> >     > roger@desktop:~/ceph-cluster$ sudo ceph auth get-or-create
> >     > client.bootstrap-mgr mon 'allow profile bootstrap-mgr'
> >     > [client.bootstrap-mgr]
> >     > key = AQBDt3RZPXdpNBAAekynuNJpVPaN1B4YTeFu4w==
> >     > roger@desktop:~/ceph-cluster$ ceph-deploy -v gatherkeys nuc1
> >     > [ceph_deploy.conf][DEBUG ] found configuration file at:
> >     > /home/roger/.cephdeploy.conf
> >     > [ceph_deploy.cli][INFO  ] Invoked (1.5.38): /usr/bin/ceph-deploy -v
> >     > gatherkeys nuc1
> >     > [ceph_deploy.cli][INFO  ] ceph-deploy options:
> >     > [ceph_deploy.cli][INFO  ]  username    : None
> >     > [ceph_deploy.cli][INFO  ]  verbose     : True
> >     > [ceph_deploy.cli][INFO  ]  overwrite_conf    : False
> >     > [ceph_deploy.cli][INFO  ]  quiet     : False
> >     > [ceph_deploy.cli][INFO  ]  cd_conf     :
> >     > <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f4ec6dd2bd8>
> >     > [ceph_deploy.cli][INFO  ]  cluster     : ceph
> >     > [ceph_deploy.cli][INFO  ]  mon     : ['nuc1']
> >     > [ceph_deploy.cli][INFO  ]  func    : <function gatherkeys at
> >     > 0x7f4ec6da1050>
> >     > [ceph_deploy.cli][INFO  ]  ceph_conf     : None
> >     > [ceph_deploy.cli][INFO  ]  default_release     : False
> >     > [ceph_deploy.gatherkeys][INFO  ] Storing keys in temp directory
> >     > /tmp/tmpdhkYYM
> >     > [nuc1][DEBUG ] connection detected need for sudo
> >     > [nuc1][DEBUG ] connected to host: nuc1
> >     > [nuc1][DEBUG ] detect platform information from remote host
> >     > [nuc1][DEBUG ] detect machine type
> >     > [nuc1][DEBUG ] get remote short hostname
> >     > [nuc1][DEBUG ] fetch remote file
> >     > [nuc1][INFO  ] Running command: sudo /usr/bin/ceph
> >     > --connect-timeout=25 --cluster=ceph
> >     > --admin-daemon=/var/run/ceph/ceph-mon.nuc1.asok mon_status
> >     > [nuc1][INFO  ] Running command: sudo /usr/bin/ceph
> >     > --connect-timeout=25 --cluster=ceph --name mon.
> >     > --keyring=/var/lib/ceph/mon/ceph-nuc1/keyring auth get client.admin
> >     > [nuc1][INFO  ] Running command: sudo /usr/bin/ceph
> >     > --connect-timeout=25 --cluster=ceph --name mon.
> >     > --keyring=/var/lib/ceph/mon/ceph-nuc1/keyring auth get
> >     > client.bootstrap-mds
> >     > [nuc1][INFO  ] Running command: sudo /usr/bin/ceph
> >     > --connect-timeout=25 --cluster=ceph --name mon.
> >     > --keyring=/var/lib/ceph/mon/ceph-nuc1/keyring auth get
> >     > client.bootstrap-mgr
> >     > [nuc1][INFO  ] Running command: sudo /usr/bin/ceph
> >     > --connect-timeout=25 --cluster=ceph --name mon.
> >     > --keyring=/var/lib/ceph/mon/ceph-nuc1/keyring auth get
> >     > client.bootstrap-osd
> >     > [nuc1][INFO  ] Running command: sudo /usr/bin/ceph
> >     > --connect-timeout=25 --cluster=ceph --name mon.
> >     > --keyring=/var/lib/ceph/mon/ceph-nuc1/keyring auth get
> >     > client.bootstrap-rgw
> >     > [ceph_deploy.gatherkeys][INFO  ] keyring
> 'ceph.client.admin.keyring'
> >     > already exists
> >     > [ceph_deploy.gatherkeys][INFO  ] keyring
> >     'ceph.bootstrap-mds.keyring'
> >     > already exists
> >     > [ceph_deploy.gatherkeys][INFO  ] Replacing
> >     > 'ceph.bootstrap-mgr.keyring' and backing up old key as
> >     > 'ceph.bootstrap-mgr.keyring-20170723085013'
> >     > [ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.mon.keyring'
> >     already exists
> >     > [ceph_deploy.gatherkeys][INFO  ] keyring
> >     'ceph.bootstrap-osd.keyring'
> >     > already exists
> >     > [ceph_deploy.gatherkeys][INFO  ] keyring
> >     'ceph.bootstrap-rgw.keyring'
> >     > already exists
> >     > [ceph_deploy.gatherkeys][INFO  ] Destroy temp directory
> >     /tmp/tmpdhkYYM
> >     > roger@desktop:~/ceph-cluster$ cat ceph.bootstrap-mgr.keyring
> >     > [client.bootstrap-mgr]
> >     > key = AQBDt3RZPXdpNBAAekynuNJpVPaN1B4YTeFu4w==
> >     > caps mon = "allow profile bootstrap-mgr"
> >     > roger@desktop:~/ceph-cluster$ cat
> >     > ceph.bootstrap-mgr.keyring-20170723085013
> >     > [client.bootstrap-mgr]
> >     > key = AAAAAAAAAAAAAAAA
> >     > caps mon = "allow profile bootstrap-mgr"
> >     > roger@desktop:~/ceph-cluster$ ceph-deploy -v mgr create nuc1
> >     > [ceph_deploy.conf][DEBUG ] found configuration file at:
> >     > /home/roger/.cephdeploy.conf
> >     > [ceph_deploy.cli][INFO  ] Invoked (1.5.38): /usr/bin/ceph-deploy -v
> >     > mgr create nuc1
> >     > [ceph_deploy.cli][INFO  ] ceph-deploy options:
> >     > [ceph_deploy.cli][INFO  ]  username    : None
> >     > [ceph_deploy.cli][INFO  ]  verbose     : True
> >     > [ceph_deploy.cli][INFO  ]  mgr     : [('nuc1', 'nuc1')]
> >     > [ceph_deploy.cli][INFO  ]  overwrite_conf    : False
> >     > [ceph_deploy.cli][INFO  ]  subcommand    : create
> >     > [ceph_deploy.cli][INFO  ]  quiet     : False
> >     > [ceph_deploy.cli][INFO  ]  cd_conf     :
> >     > <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f410776cc20>
> >     > [ceph_deploy.cli][INFO  ]  cluster     : ceph
> >     > [ceph_deploy.cli][INFO  ]  func    : <function mgr at
> >     0x7f4107de3668>
> >     > [ceph_deploy.cli][INFO  ]  ceph_conf     : None
> >     > [ceph_deploy.cli][INFO  ]  default_release     : False
> >     > [ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts
> >     nuc1:nuc1
> >     > [nuc1][DEBUG ] connection detected need for sudo
> >     > [nuc1][DEBUG ] connected to host: nuc1
> >     > [nuc1][DEBUG ] detect platform information from remote host
> >     > [nuc1][DEBUG ] detect machine type
> >     > [ceph_deploy.mgr][INFO  ] Distro info: Ubuntu 16.04 xenial
> >     > [ceph_deploy.mgr][DEBUG ] remote host will use systemd
> >     > [ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to nuc1
> >     > [nuc1][DEBUG ] write cluster configuration to
> >     /etc/ceph/{cluster}.conf
> >     > [nuc1][DEBUG ] create path if it doesn't exist
> >     > [nuc1][INFO  ] Running command: sudo ceph --cluster ceph --name
> >     > client.bootstrap-mgr --keyring
> >     > /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create
> mgr.nuc1
> >     > mon allow profile mgr osd allow * mds allow * -o
> >     > /var/lib/ceph/mgr/ceph-nuc1/keyring
> >     > [nuc1][ERROR ] 2017-07-23 14:51:13.413218 7f62943cc700  0 librados:
> >     > client.bootstrap-mgr authentication error (22) Invalid argument
> >     > [nuc1][ERROR ] InvalidArgumentError does not take keyword arguments
> >     > [nuc1][ERROR ] exit code from command was: 1
> >     > [ceph_deploy.mgr][ERROR ] could not create mgr
> >     > [ceph_deploy][ERROR ] GenericError: Failed to create 1 MGRs
> >     >
> >     > roger@desktop:~/ceph-cluster$
> >     >
> >     >
> >     >
> >     > On Sun, Jul 23, 2017 at 1:17 AM Mark Kirkwood
> >     > <mark.kirkw...@catalyst.net.nz> wrote:
> >     >
> >     >     On 22/07/17 23:50, Oscar Segarra wrote:
> >     >
> >     >     > Hi,
> >     >     >
> >     >     > I have upgraded from kraken version with a simple "yum
> upgrade
> >     >     > command". Later the upgrade, I'd like to deploy the mgr
> daemon
> >     >     on one
> >     >     > node of my ceph infrastrucute.
> >     >     >
> >     >     > But, for any reason, It gets stuck!
> >     >     >
> >     >     > Let's see the complete set of commands:
> >     >     >
> >     >     >
> >     >     > [root@vdicnode01 ~]# ceph -s
> >     >     >   cluster:
> >     >     >     id:     656e84b2-9192-40fe-9b81-39bd0c7a3196
> >     >     >     health: HEALTH_WARN
> >     >     > *            no active mgr*
> >     >     >
> >     >     >   services:
> >     >     >     mon: 1 daemons, quorum vdicnode01
> >     >     >     mgr: no daemons active
> >     >     >     osd: 2 osds: 2 up, 2 in
> >     >     >
> >     >     >   data:
> >     >     >     pools:   0 pools, 0 pgs
> >     >     >     objects: 0 objects, 0 bytes
> >     >     >     usage:   0 kB used, 0 kB / 0 kB avail
> >     >     >     pgs:
> >     >     >
> >     >     > [root@vdicnode01 ~]# su - vdicceph
> >     >     > Last login: Sat Jul 22 12:50:38 CEST 2017 on pts/0
> >     >     > [vdicceph@vdicnode01 ~]$ cd ceph
> >     >     >
> >     >     > *[vdicceph@vdicnode01 ceph]$ ceph-deploy --username
> >     vdicceph -v mgr
> >     >     > create vdicnode02.local*
> >     >     > [ceph_deploy.conf][DEBUG ] found configuration file at:
> >     >     > /home/vdicceph/.cephdeploy.conf
> >     >     > [ceph_deploy.cli][INFO  ] Invoked (1.5.38): /bin/ceph-deploy
> >     >     > --username vdicceph -v mgr create vdicnode02.local
> >     >     > [ceph_deploy.cli][INFO  ] ceph-deploy options:
> >     >     > [ceph_deploy.cli][INFO  ]  username       : vdicceph
> >     >     > [ceph_deploy.cli][INFO  ]  verbose      : True
> >     >     > [ceph_deploy.cli][INFO  ]  mgr      :
> >     >     > [('vdicnode02.local', 'vdicnode02.local')]
> >     >     > [ceph_deploy.cli][INFO  ]  overwrite_conf  : False
> >     >     > [ceph_deploy.cli][INFO  ]  subcommand       : create
> >     >     > [ceph_deploy.cli][INFO  ]  quiet      : False
> >     >     > [ceph_deploy.cli][INFO  ]  cd_conf      :
> >     >     > <ceph_deploy.conf.cephdeploy.Conf instance at 0x164f290>
> >     >     > [ceph_deploy.cli][INFO  ]  cluster      : ceph
> >     >     > [ceph_deploy.cli][INFO  ]  func       : <function
> >     >     > mgr at 0x15db848>
> >     >     > [ceph_deploy.cli][INFO  ]  ceph_conf      : None
> >     >     > [ceph_deploy.cli][INFO  ]  default_release : False
> >     >     > [ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts
> >     >     > vdicnode02.local:vdicnode02.local
> >     >     > [vdicnode02.local][DEBUG ] connection detected need for sudo
> >     >     > [vdicnode02.local][DEBUG ] connected to host:
> >     >     vdicceph@vdicnode02.local
> >     >     > [vdicnode02.local][DEBUG ] detect platform information from
> >     >     remote host
> >     >     > [vdicnode02.local][DEBUG ] detect machine type
> >     >     > [ceph_deploy.mgr][INFO  ] Distro info: CentOS Linux
> >     7.3.1611 Core
> >     >     > [ceph_deploy.mgr][DEBUG ] remote host will use systemd
> >     >     > [ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to
> >     >     vdicnode02.local
> >     >     > [vdicnode02.local][DEBUG ] write cluster configuration to
> >     >     > /etc/ceph/{cluster}.conf
> >     >     > [vdicnode02.local][DEBUG ] create path if it doesn't exist
> >     >     > [vdicnode02.local][INFO  ] Running command: sudo ceph
> >     --cluster ceph
> >     >     > --name client.bootstrap-mgr --keyring
> >     >     > /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create
> >     >     > mgr.vdicnode02.local mon allow profile mgr osd allow * mds
> >     allow
> >     >     * -o
> >     >     > /var/lib/ceph/mgr/ceph-vdicnode02.local/keyring
> >     >     > [vdicnode02.local][WARNIN] No data was received after 300
> >     seconds,
> >     >     > disconnecting...
> >     >     > [vdicnode02.local][INFO  ] Running command: sudo systemctl
> >     enable
> >     >     > ceph-mgr@vdicnode02.local
> >     >     > [vdicnode02.local][WARNIN] Created symlink from
> >     >     >
> >     >
> >
> /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@vdicnode02.local.service
> >     >     > to /usr/lib/systemd/system/ceph-mgr@.service.
> >     >     > [vdicnode02.local][INFO  ] Running command: sudo systemctl
> >     start
> >     >     > ceph-mgr@vdicnode02.local
> >     >     > [vdicnode02.local][INFO  ] Running command: sudo systemctl
> >     enable
> >     >     > ceph.target
> >     >     >
> >     >     > *[vdicceph@vdicnode01 ceph]$ sudo ceph -s --verbose
> >     --watch-warn
> >     >     > --watch-error*
> >     >     > parsed_args: Namespace(admin_socket=None,
> >     admin_socket_nope=None,
> >     >     > cephconf=None, client_id=None, client_name=None,
> cluster=None,
> >     >     > cluster_timeout=None, completion=False, help=False,
> >     input_file=None,
> >     >     > output_file=None, output_format=None, status=True,
> >     verbose=True,
> >     >     > version=False, watch=False, watch_channel='cluster',
> >     >     > watch_debug=False, watch_error=True, watch_info=False,
> >     >     > watch_sec=False, watch_warn=True), childargs: []
> >     >     >
> >     >     > < no response for ever >
> >     >     >
> >     >     > Anybody has experienced the same issue? how can I make my
> ceph
> >     >     work again?
> >     >     >
> >     >     > Thanks a lot.
> >     >     >
> >     >     >
> >     >     >
> >     >
> >     >     I've encountered this (upgrading from Jewel).
> >     >
> >     >     The cause seems to be a busted mgr bootstrap key (see
> >     below). Simply
> >     >     restarting your Ceph mons *should* get you back to functioning
> >     >     (mon has
> >     >     hung as the key is too short), then you can fix the key and
> >     deploy
> >     >     a mgr
> >     >     (here's my example for deploying a mgr on my host ceph1):
> >     >
> >     >     $ sudo ceph auth get client.bootstrap-mgr
> >     >     exported keyring for client.bootstrap-mgr
> >     >     [client.bootstrap-mgr]
> >     >              key = AAAAAAAAAAAAAAAA
> >     >              caps mon = "allow profile bootstrap-mgr"
> >     >
> >     >
> >     >     So destroy and recreate it:
> >     >
> >     >
> >     >     $ sudo ceph auth del client.bootstrap-mgr
> >     >     updated
> >     >
> >     >     $ sudo ceph auth get-or-create client.bootstrap-mgr mon
> >     'allow profile
> >     >     bootstrap-mgr'
> >     >     [client.bootstrap-mgr]
> >     >              key = AQBDenFZW7yKJxAAYlSBQLtDADIzsnfBcdxHpg==
> >     >
> >     >     $ ceph-deploy -v gatherkeys ceph1
> >     >     $ ceph-deploy -v mgr create ceph1
> >     >
> >     >
> >     >     regards
> >     >
> >     >     Mark
> >     >
> >     >
> >     >
> >
>
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
