Yes, of course...

iptables -F (no rules) is effectively the same as having iptables disabled
SELINUX=disabled

I'm using VirtualBox as a test environment, but I don't think that should be a problem.

Firewall? Disable iptables, set SELinux to Permissive.
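On CentOS 6 that would be something like this (run as root; setenforce only lasts until reboot, so the last line is just one way to keep SELinux permissive afterwards):

# service iptables stop
# chkconfig iptables off
# setenforce 0
# sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config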

On 15 Oct, 2014 5:49 pm, "Roman" <intra...@gmail.com> wrote:

    Pascal,

    Here is my latest installation:

        cluster 204986f6-f43c-4199-b093-8f5c7bc641bb
         health HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean;
    recovery 20/40 objects degraded (50.000%)
         monmap e1: 2 mons at {ceph02=192.168.33.142:6789/0,ceph03=192.168.33.143:6789/0}, election epoch 4, quorum 0,1 ceph02,ceph03
         mdsmap e4: 1/1/1 up {0=ceph02=up:active}
         osdmap e8: 2 osds: 2 up, 2 in
          pgmap v14: 192 pgs, 3 pools, 1884 bytes data, 20 objects
                68796 kB used, 6054 MB / 6121 MB avail
                20/40 objects degraded (50.000%)
                     192 active+degraded


    host ceph01 - admin
    host ceph02 - mon.ceph02 + osd.1 (sdb, 8G) + mds
    host ceph03 - mon.ceph03 + osd.0 (sdb, 8G)

    $ ceph osd tree
    # id    weight  type name       up/down reweight
    -1      0       root default
    -2      0               host ceph03
    0       0                       osd.0   up      1
    -3      0               host ceph02
    1       0                       osd.1   up      1


    $ ceph osd dump
    epoch 8
    fsid 204986f6-f43c-4199-b093-8f5c7bc641bb
    created 2014-10-15 13:39:05.986977
    modified 2014-10-15 13:40:45.644870
    flags
    pool 0 'data' replicated size 2 min_size 1 crush_ruleset 0
    object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags
    hashpspool crash_replay_interval 45 stripe_width 0
    pool 1 'metadata' replicated size 2 min_size 1 crush_ruleset 0
    object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags
    hashpspool stripe_width 0
    pool 2 'rbd' replicated size 2 min_size 1 crush_ruleset 0
    object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags
    hashpspool stripe_width 0
    max_osd 2
    osd.0 up   in  weight 1 up_from 4 up_thru 4 down_at 0 last_clean_interval [0,0) 192.168.33.143:6800/2284 192.168.33.143:6801/2284 192.168.33.143:6802/2284 192.168.33.143:6803/2284 exists,up dccd6b99-1885-4c62-864b-107bd9ba0d84
    osd.1 up   in  weight 1 up_from 8 up_thru 0 down_at 0 last_clean_interval [0,0) 192.168.33.142:6800/2399 192.168.33.142:6801/2399 192.168.33.142:6802/2399 192.168.33.142:6803/2399 exists,up 4d4adf4b-ae8e-4e26-8667-c952c7fc4e45

    Thanks,
    Roman

    Hello,

        osdmap e10: 4 osds: 2 up, 2 in

    What about the following commands:
    # ceph osd tree
    # ceph osd dump

    You have 2 OSDs on 2 hosts, but 4 OSDs seem to be defined in your
    crush map.
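
    If those extra OSDs are really just leftovers from an earlier
    installation attempt, something like the following should remove
    them from the cluster (the id 2 here is only a placeholder; use
    whatever stale ids show up in ceph osd tree):

    # ceph osd crush remove osd.2
    # ceph auth del osd.2
    # ceph osd rm 2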

    Regards,

    Pascal

    On 15 Oct 2014, at 11:11, Roman <intra...@gmail.com> wrote:

    Hi ALL,

    I've created 2 MONs and 2 OSDs on CentOS 6.5 (x86_64).

    I've tried 4 times (each time on a clean CentOS installation), but
    the health is always HEALTH_WARN.

    Never HEALTH_OK, always HEALTH_WARN! :(

    # ceph -s
       cluster d073ed20-4c0e-445e-bfb0-7b7658954874
        health HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean
        monmap e1: 2 mons at {ceph02=192.168.0.142:6789/0,ceph03=192.168.0.143:6789/0}, election epoch 4, quorum 0,1 ceph02,ceph03
        osdmap e10: 4 osds: 2 up, 2 in
         pgmap v15: 192 pgs, 3 pools, 0 bytes data, 0 objects
               68908 kB used, 6054 MB / 6121 MB avail
                    192 active+degraded

    What am I doing wrong???

    -----------

    host:  192.168.0.141 - admin
    host:  192.168.0.142 - mon.ceph02 + osd.0 (/dev/sdb, 8G)
    host:  192.168.0.143 - mon.ceph03 + osd.1 (/dev/sdb, 8G)

    ceph-deploy version 1.5.18

    [global]
    osd pool default size = 2
    -----------

    Thanks,
    Roman.

    --
    Pascal Morillon
    University of Rennes 1
    IRISA, Rennes, France
    SED
    Offices : E206 (Grid5000), D050 (SED)
    Phone : +33 2 99 84 22 10
    pascal.moril...@irisa.fr
    Twitter @pmorillon <https://twitter.com/pmorillon>
    xmpp: pmori...@jabber.grid5000.fr
    http://www.grid5000.fr





_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
