On Thu, Aug 29, 2013 at 8:00 AM, Pavel Timoschenkov <
[email protected]> wrote:
> Hi.
> New trouble with ceph-deploy. When I'm executing:
>
> ceph-deploy osd prepare ceph001:sdaa:/dev/sda1
> ceph-deploy osd activate ceph001:sdaa:/dev/sda1
> or
> ceph-deploy osd prepare ceph001:/dev/sdaa1:/dev/sda1
> ceph-deploy osd activate ceph001:/dev/sdaa:/dev/sda1
>
Have you tried with
ceph-deploy osd create ceph001:sdaa:/dev/sda1
?
`create` should do `prepare` and `activate` for you. Also be mindful that
the arguments must be passed in the form:
HOST:DISK[:JOURNAL]
where JOURNAL is completely optional. This is also detailed here:
http://ceph.com/docs/master/rados/deployment/ceph-deploy-osd/#create-osds
Have you followed those instructions to deploy your OSDs?
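For illustration only, the HOST:DISK[:JOURNAL] argument splits on colons into a host, a data disk, and an optional journal. A minimal shell sketch of that splitting (my own illustration, not ceph-deploy's actual parser) would be:

```shell
# Hypothetical illustration of splitting a HOST:DISK[:JOURNAL] spec.
spec="ceph001:sdaa:/dev/sda1"

host=${spec%%:*}        # everything before the first colon -> host
rest=${spec#*:}         # everything after the first colon
disk=${rest%%:*}        # data disk
journal=${rest#*:}      # journal, if a second colon was present
[ "$journal" = "$disk" ] && journal=""   # no journal given

echo "host=$host disk=$disk journal=$journal"
```

With only HOST:DISK (e.g. `ceph001:sdaa`) the journal ends up empty, which matches the "JOURNAL is optional" note above.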
>
> OSD not created:
>
> ceph -k ceph.client.admin.keyring -s
> cluster 4b91a9e9-0e6c-4570-98c6-1398c6900a9e
> health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
> monmap e1: 1 mons at {ceph001=172.16.4.32:6789/0}, election epoch 2,
> quorum 0 ceph001
> osdmap e1: 0 osds: 0 up, 0 in
> pgmap v2: 192 pgs: 192 creating; 0 bytes data, 0 KB used, 0 KB / 0 KB
> avail
> mdsmap e1: 0/0/1 up
>
> ceph -k ceph.client.admin.keyring osd tree
> # id weight type name up/down reweight
> -1 0 root default
>
> But if I create a folder for the ceph data and execute:
>
> ceph-deploy osd prepare ceph001:/var/lib/ceph/osd/osd001:/dev/sda1
> ceph-deploy osd activate ceph001:/var/lib/ceph/osd/osd001:/dev/sda1
>
> Those do not look right to me.
> OSD created:
>
> ceph -k ceph.client.admin.keyring -s
> cluster 4b91a9e9-0e6c-4570-98c6-1398c6900a9e
> health HEALTH_WARN 192 pgs stuck inactive; 192 pgs stuck unclean
> monmap e1: 1 mons at {ceph001=172.16.4.32:6789/0}, election epoch 2,
> quorum 0 ceph001
> osdmap e5: 1 osds: 1 up, 1 in
> pgmap v6: 192 pgs: 192 creating; 0 bytes data, 0 KB used, 0 KB / 0 KB
> avail
> mdsmap e1: 0/0/1 up
>
> ceph -k ceph.client.admin.keyring osd tree
> # id weight type name up/down reweight
> -1 0.03999 root default
> -2 0.03999 host ceph001
> 0 0.03999 osd.0 up 1
>
> Is this a bug, or should I mount the data disks to some directory?
>
>
> and more:
> The 'ceph-deploy osd create' construction doesn't work for me. Only
> 'prepare' and 'activate' do.
>
When you say `create` didn't work for you, how so? What output did you see?
Can you share some logs/output?
>
> dpkg -s ceph-deploy
> Version: 1.2.1-1precise
>
> _______________________________________________
> ceph-users mailing list
> [email protected]
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>