Thank you, Michael,

            It works like a charm! I got:

    cluster c887f570-bd8d-4aa3-bc63-b8ff959bd03f
     health HEALTH_OK
     monmap e1: 1 mons at {storage=172.16.3.2:6789/0}, election epoch 2, quorum 
0 storage
     osdmap e18: 2 osds: 2 up, 2 in
      pgmap v33: 192 pgs, 3 pools, 0 bytes data, 0 objects
            71260 kB used, 1396 GB / 1396 GB avail
                 192 active+clean

            The only thing that puzzles me is the available space. I have 2x750 GB 
drives, and the total available space is indeed 1396 GB, but if Ceph 
automatically creates 2 replicas of every object, shouldn't this space be 
divided by 2? The correct number should be ~700 GB.

Regards,

Vadim.

From: [email protected] 
[mailto:[email protected]] On Behalf Of Michael
Sent: Tuesday, April 29, 2014 11:10 PM
To: [email protected]
Subject: Re: [ceph-users] Ceph 0.72.2 installation on Ubuntu 12.04.4 LTS never 
got active+clean

Hi Vadim,

The default distribution rule is now to split replicas across hosts. If you 
only have one host, Ceph will not be able to replicate your data.

If you need to replicate within a single host you will have to update your 
crush rules from "step chooseleaf firstn 0 type host" to "step chooseleaf 
firstn 0 type osd" or similar depending on your crush setup.
Please see the documentation for more info 
https://ceph.com/docs/master/rados/operations/crush-map/.
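For reference, one common way to make that change is to export the CRUSH map, edit it, and inject it back (a sketch; the file names here are arbitrary, and the cluster commands obviously need a running cluster):

```shell
# Dump the current CRUSH map in binary form, then decompile it to text.
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# Edit crushmap.txt, changing
#   step chooseleaf firstn 0 type host
# to
#   step chooseleaf firstn 0 type osd

# Recompile the edited map and load it into the cluster.
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new
```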

-Michael

On 29/04/2014 21:00, Vadim Kimlaychuk wrote:
Hello all,

            I have tried to install it almost 10 times from scratch, following 
the official guides:

http://ceph.com/docs/master/start/
            and older one
http://eu.ceph.com/docs/wip-6919/start/quick-start/

            The result is always the same. At the end of the installation I get:

<root@storage>/root # ceph -s
     cluster c887f570-bd8d-4aa3-bc63-b8ff959bd03f
     health HEALTH_WARN 91 pgs degraded; 192 pgs stuck unclean
     monmap e1: 1 mons at {storage=172.16.3.2:6789/0}, election epoch 2, quorum 
0 storage
     osdmap e16: 2 osds: 2 up, 2 in
      pgmap v29: 192 pgs, 3 pools, 0 bytes data, 0 objects
            69780 kB used, 1396 GB / 1396 GB avail
                 101 active+remapped
                  91 active+degraded

And
<root@storage>/root # ceph osd tree
# id    weight  type name       up/down reweight
-1      2       root default
-2      2               host storage
0       1                       osd.0   up      1
1       1                       osd.1   up      1


            Or similar, differing only in the number of "active+remapped" and 
"active+degraded" PGs. Writing an object with rados did not change anything.

            I never get any PGs into the active+clean state the guide promises. 
What am I doing wrong? Are these guides outdated?

Host configuration is simple: a single PC with 3 HDDs, one for the OS and 
journal, two for OSDs.

BR,

Vadim Kimlaychuk





_______________________________________________
ceph-users mailing list
[email protected]<mailto:[email protected]>
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
