Hi Juan,
Are the two OSDs that you started with on the same host? I've seen the
same problem, which fixed itself after I added more OSDs on a separate host.
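If you do need to keep both OSDs on one host for the moment, the likely culprit is that the default CRUSH rule wants each replica on a different host, so with a single host the PGs can never be placed and stay in "creating". Roughly (a sketch from memory, so double-check the rule names against your own map; the file names below are arbitrary):

ceph osd getcrushmap -o crushmap.bin       # export the current CRUSH map
crushtool -d crushmap.bin -o crushmap.txt  # decompile it to text

# In crushmap.txt, inside the rule(s) your pools use, change
#   step chooseleaf firstn 0 type host
# to
#   step chooseleaf firstn 0 type osd

crushtool -c crushmap.txt -o crushmap.new  # recompile
ceph osd setcrushmap -i crushmap.new       # inject the edited map

For a cluster you haven't created yet, putting "osd crush chooseleaf type = 0" in the [global] section of ceph.conf before bootstrapping should give the same result from the start.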
Cheers,
Lincoln
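P.S. Two quick checks that should tell you whether placement really is the problem:

ceph osd tree                 # shows which host bucket each OSD sits under
ceph pg dump_stuck inactive   # lists the PGs that never went active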
On 11/3/2013 12:09 PM, Juan Vega wrote:
Ceph Users,
I'm trying to create a cluster with 9 OSDs manually (without
ceph-deploy). I started with only 2 and will add more afterwards.
The problem is that the placement groups never get past 'creating',
even though the OSDs are up and in:
vegaj@ServerB2-exper:~/cephcluster$ ceph -c ceph.conf -w
  cluster 31843595-d34f-4506-978a-88d44eef9999
   health HEALTH_WARN 192 pgs stuck inactive; 192 pgs stuck unclean
   monmap e1: 1 mons at {serverB2-exper=192.168.100.2:6789/0}, election epoch 2, quorum 0 serverB2-exper
   osdmap e12: 2 osds: 2 up, 2 in
    pgmap v759: 192 pgs: 192 creating; 0 bytes data, 2436 MB used, 206 GB / 220 GB avail
   mdsmap e1: 0/0/1 up

2013-11-03 10:02:35.221325 mon.0 [INF] pgmap v759: 192 pgs: 192 creating; 0 bytes data, 2436 MB used, 206 GB / 220 GB avail
2013-11-03 10:04:35.242818 mon.0 [INF] pgmap v760: 192 pgs: 192 creating; 0 bytes data, 2436 MB used, 206 GB / 220 GB avail
I'm not using authentication. My ceph.conf is as follows:
[global]
fsid = 31843595-d34f-4506-978a-88d44eef9999
mon_initial_members = serverB2-exper
mon_host = 192.168.100.2
osd_journal_size = 1024
filestore_xattr_use_omap = true
auth cluster required = none
auth service required = none
auth client required = none
auth supported = none
Am I missing something?
Thanks,
Juan Vega
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com