64 PGs per pool shouldn't cause any issues while there are only 3 OSDs. It'll be something to pay attention to if a lot more get added, though.

Your replication rule is probably set to something other than host.
You'll want to extract your CRUSH map, decompile it, and check whether the rule's "step chooseleaf" line uses type osd or rack.
If it's not host, change it to host, then recompile and inject the map back in.
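The usual round trip looks something like this (the file names are just placeholders):

```
# grab the current CRUSH map and decompile it to text
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# in crushmap.txt, find the rule's chooseleaf step, e.g.:
#   step chooseleaf firstn 0 type osd
# and change the bucket type to host:
#   step chooseleaf firstn 0 type host

# recompile and inject the edited map
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new
```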

Check the docs on CRUSH maps (http://ceph.com/docs/master/rados/operations/crush-map/) for more info.

-Michael

On 23/05/2014 10:53, Karan Singh wrote:
Try increasing the placement groups for pools

ceph osd pool set data pg_num 128
ceph osd pool set data pgp_num 128

similarly for other 2 pools as well.
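The 128 figure follows the usual rule of thumb from the Ceph docs: aim for roughly 100 PGs per OSD, divide by the replica count, and round up to the next power of two. A quick sketch of that arithmetic (the function name is mine):

```python
def suggest_pg_num(num_osds, replica_size, target_pgs_per_osd=100):
    """Rule-of-thumb pg_num: (OSDs * target) / replicas,
    rounded up to the next power of two."""
    raw = num_osds * target_pgs_per_osd / replica_size
    pg_num = 1
    while pg_num < raw:
        pg_num *= 2
    return pg_num

# 3 OSDs, size-3 pools as in the osd dump below:
print(suggest_pg_num(3, 3))  # -> 128
```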

- karan -


On 23 May 2014, at 11:50, [email protected] wrote:

Dear ceph,

I am trying to setup ceph 0.80.1 with the following components :

1 x mon - Debian Wheezy (i386)
3 x osds - Debian Wheezy (i386)

(all are kvm powered)

Status after the standard setup procedure :

root@ceph-node2:~# ceph -s
   cluster d079dd72-8454-4b4a-af92-ef4c424d96d8
     health HEALTH_WARN 192 pgs incomplete; 192 pgs stuck inactive; 192 pgs stuck unclean
     monmap e1: 1 mons at {ceph-node1=192.168.123.48:6789/0}, election epoch 2, quorum 0 ceph-node1
    osdmap e11: 3 osds: 3 up, 3 in
     pgmap v18: 192 pgs, 3 pools, 0 bytes data, 0 objects
           103 MB used, 15223 MB / 15326 MB avail
                192 incomplete

root@ceph-node2:~# ceph health
HEALTH_WARN 192 pgs incomplete; 192 pgs stuck inactive; 192 pgs stuck unclean

root@ceph-node2:~# ceph osd tree
# id    weight  type name       up/down reweight
-1      0       root default
-2      0               host ceph-node2
0       0                       osd.0   up      1
-3      0               host ceph-node3
1       0                       osd.1   up      1
-4      0               host ceph-node4
2       0                       osd.2   up      1


root@ceph-node2:~# ceph osd dump
epoch 11
fsid d079dd72-8454-4b4a-af92-ef4c424d96d8
created 2014-05-23 09:00:08.780211
modified 2014-05-23 09:01:33.438001
flags

pool 0 'data' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0 flags hashpspool crash_replay_interval 45 stripe_width 0

pool 1 'metadata' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0 flags hashpspool stripe_width 0

pool 2 'rbd' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0 flags hashpspool stripe_width 0

max_osd 3

osd.0 up in weight 1 up_from 4 up_thru 5 down_at 0 last_clean_interval [0,0) 192.168.123.49:6800/11373 192.168.123.49:6801/11373 192.168.123.49:6802/11373 192.168.123.49:6803/11373 exists,up 21a7d2a8-b709-4a28-bc3b-850913fe4c6b

osd.1 up in weight 1 up_from 8 up_thru 0 down_at 0 last_clean_interval [0,0) 192.168.123.50:6800/10542 192.168.123.50:6801/10542 192.168.123.50:6802/10542 192.168.123.50:6803/10542 exists,up c1cd3ad1-b086-438f-a22d-9034b383a1be

osd.2 up in weight 1 up_from 11 up_thru 0 down_at 0 last_clean_interval [0,0) 192.168.123.53:6800/6962 192.168.123.53:6801/6962 192.168.123.53:6802/6962 192.168.123.53:6803/6962 exists,up aa06d7e4-181c-4d70-bb8e-018b088c5053


What am I doing wrong here ?
Or what additional information should I provide to help troubleshoot this?

thanks,

---

Jan

P.S. with emperor 0.72.2 I had no such problems
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




