Adding some debug arguments generated output that seems to indicate the
problem is a missing keyring, but the keyring appears to be present. Why would
radosgw complain about the keyring and fail to start?
[ceph@joceph08 ceph]$ sudo /usr/bin/radosgw -d --debug-rgw 20 --debug-ms 1 start
2013-11-01 10:59:47.015332 7f83978e4820 0 ceph version 0.67.4 (ad85b8bfafea6232d64cb7ba76a8b6e8252fa0c7), process radosgw, pid 18760
2013-11-01 10:59:47.015338 7f83978e4820 -1 WARNING: libcurl doesn't support curl_multi_wait()
2013-11-01 10:59:47.015340 7f83978e4820 -1 WARNING: cross zone / region transfer performance may be affected
2013-11-01 10:59:47.018707 7f83978e4820 1 -- :/0 messenger.start
2013-11-01 10:59:47.018773 7f83978e4820 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication
2013-11-01 10:59:47.018774 7f83978e4820 0 librados: client.admin initialization error (2) No such file or directory
2013-11-01 10:59:47.018788 7f83978e4820 1 -- :/1018760 mark_down_all
2013-11-01 10:59:47.018932 7f83978e4820 1 -- :/1018760 shutdown complete.
2013-11-01 10:59:47.018967 7f83978e4820 -1 Couldn't init storage provider (RADOS)
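Notably, the trace shows librados initializing as client.admin rather than the
gateway identity. One possible next check is forcing the identity and keyring
explicitly on the command line (a sketch, using the standard Ceph common
options -n and --keyring):

    sudo /usr/bin/radosgw -d --debug-rgw 20 --debug-ms 1 \
        -n client.radosgw.gateway --keyring /etc/ceph/keyring.radosgw.gateway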
[ceph@joceph08 ceph]$ sudo service ceph-radosgw status
/usr/bin/radosgw is not running.
[ceph@joceph08 ceph]$ pwd
/etc/ceph
[ceph@joceph08 ceph]$ ls
ceph.client.admin.keyring ceph.conf keyring.radosgw.gateway rbdmap
[ceph@joceph08 ceph]$ cat ceph.client.admin.keyring
[client.admin]
key = AQCYyHJSCFH3BBAA472q80qrAiIIVbvJfK/47A==
[ceph@joceph08 ceph]$ cat keyring.radosgw.gateway
[client.radosgw.gateway]
key = AQBh6nNS0Cu3HxAAMxLsbEYZ3pEbwEBajQb1WA==
caps mon = "allow rw"
caps osd = "allow rwx"
[ceph@joceph08 ceph]$ cat ceph.conf
[client.radosgw.joceph08]
host = joceph08
log_file = /var/log/ceph/radosgw.log
keyring = /etc/ceph/keyring.radosgw.gateway
rgw_socket_path = /tmp/radosgw.sock
[global]
auth_service_required = cephx
filestore_xattr_use_omap = true
auth_client_required = cephx
auth_cluster_required = cephx
mon_host = 10.23.37.142,10.23.37.145,10.23.37.161,10.23.37.165
osd_journal_size = 1024
mon_initial_members = joceph01, joceph02, joceph03, joceph04
fsid = 74d808db-aaa7-41d2-8a84-7d590327a3c7
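Comparing the two files above: ceph.conf defines [client.radosgw.joceph08],
while the keyring defines [client.radosgw.gateway], so the identity named in
the conf has no matching key. A throwaway Python sketch of that cross-check
(file contents inlined for illustration):

```python
import configparser

# Inlined excerpts of the two files shown above (illustrative)
ceph_conf = """
[client.radosgw.joceph08]
host = joceph08
keyring = /etc/ceph/keyring.radosgw.gateway
[global]
auth_service_required = cephx
"""

keyring_file = """
[client.radosgw.gateway]
key = AQBh6nNS0Cu3HxAAMxLsbEYZ3pEbwEBajQb1WA==
caps mon = "allow rw"
caps osd = "allow rwx"
"""

def client_sections(text):
    # Collect every [client.*] section name from an INI-style file
    cp = configparser.ConfigParser()
    cp.read_string(text)
    return {s for s in cp.sections() if s.startswith("client.")}

# An identity configured in ceph.conf but absent from the keyring
# cannot authenticate with cephx ("missing keyring").
missing = client_sections(ceph_conf) - client_sections(keyring_file)
print(sorted(missing))  # -> ['client.radosgw.joceph08']
```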
From: Gruher, Joseph R
Sent: Wednesday, October 30, 2013 12:24 PM
To: [email protected]
Subject: radosgw fails to start, leaves no clues why
Hi all-
Trying to set up object storage on CentOS. I've done this successfully on
Ubuntu, but I'm having some trouble on CentOS. I think I have everything
configured, but when I start the radosgw service it reports that it is
starting, and then its status shows it is not running, with no helpful output
on the console or in the radosgw log as to why. I once saw a similar problem
on Ubuntu when the hostname was incorrect in ceph.conf, but that doesn't seem
to be the issue here. Not sure where to go next. Any suggestions as to what
the problem could be? Thanks!
[ceph@joceph08 ceph]$ sudo service httpd restart
Stopping httpd: [ OK ]
Starting httpd: [ OK ]
[ceph@joceph08 ceph]$ cat ceph.conf
[joceph08.radosgw.gateway]
keyring = /etc/ceph/keyring.radosgw.gateway
rgw_dns_name = joceph08
host = joceph08
log_file = /var/log/ceph/radosgw.log
rgw_socket_path = /tmp/radosgw.sock
[global]
filestore_xattr_use_omap = true
mon_host = 10.23.37.142,10.23.37.145,10.23.37.161
osd_journal_size = 1024
mon_initial_members = joceph01, joceph02, joceph03
auth_supported = cephx
fsid = 721ea513-e84c-48df-9c8f-f1d9e602b810
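For reference, the section name here is [joceph08.radosgw.gateway], whereas
the form the Ceph docs use is client.radosgw.<instance>, which would also
match the keyring's entry. A sketch of the stanza in that form (instance name
assumed):

    [client.radosgw.gateway]
    host = joceph08
    keyring = /etc/ceph/keyring.radosgw.gateway
    rgw_socket_path = /tmp/radosgw.sock
    log_file = /var/log/ceph/radosgw.log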
[ceph@joceph08 ceph]$ sudo service ceph-radosgw start
Starting radosgw instance(s)...
[ceph@joceph08 ceph]$ sudo service ceph-radosgw status
/usr/bin/radosgw is not running.
[ceph@joceph08 ceph]$ sudo cat /var/log/ceph/radosgw.log
[ceph@joceph08 ceph]$
[ceph@joceph08 ceph]$ sudo cat /etc/ceph/keyring.radosgw.gateway
[client.radosgw.gateway]
key = AQDbUnFSIGT2BxAA5rz9I1HHIG/LJx+XCYot1w==
caps mon = "allow rw"
caps osd = "allow rwx"
[ceph@joceph08 ceph]$ ceph status
cluster 721ea513-e84c-48df-9c8f-f1d9e602b810
health HEALTH_OK
monmap e1: 3 mons at {joceph01=10.23.37.142:6789/0,joceph02=10.23.37.145:6789/0,joceph03=10.23.37.161:6789/0}, election epoch 8, quorum 0,1,2 joceph01,joceph02,joceph03
osdmap e119: 16 osds: 16 up, 16 in
pgmap v1383: 3200 pgs: 3200 active+clean; 219 GB data, 411 GB used, 10760 GB / 11172 GB avail
mdsmap e1: 0/0/1 up
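Since the keyring file looks right locally, it may also be worth confirming
the cluster actually has that key registered, e.g. (standard ceph auth
subcommand, identity name as in the keyring above):

    sudo ceph auth get client.radosgw.gateway

and comparing the returned key and caps with the ones in
/etc/ceph/keyring.radosgw.gateway.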
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com