On Sat, Apr 5, 2014 at 10:00 AM, Max Kutsevol <[email protected]> wrote:
> Hello!
>
> I am new to ceph, please take that into account.
>
> I'm experimenting with a 3-mon + 2-OSD setup and got into a situation
> where I recreated both of the OSDs.
>
> My pools:
> ceph> osd lspools
>  0 data,1 metadata,
>
> These are just the defaults. I deleted the rbd pool; the other two I can't
> delete because it says they are used by CephFS (no mds is running - why
> are they in use?)
>
> Cluster status
>
> ceph> status
> cluster 8c3d2e5d-fce9-425b-8028-d2105a9cac3f
> health HEALTH_WARN 128 pgs degraded; 128 pgs stale; 128 pgs stuck stale;
> 128 pgs stuck unclean; 2/2 in osds are down
> monmap e2: 3 mons at
> {mon0=10.1.0.7:6789/0,mon1=10.1.0.8:6789/0,mon2=10.1.0.11:6789/0},
> election epoch 52, quorum 0,1,2 mon0,mon1,mon2
>   osdmap e70: 2 osds: 0 up, 2 in
>    pgmap v129: 128 pgs, 3 pools, 0 bytes data, 0 objects
>          2784 kB used, 36804 MB / 40956 MB avail
>               128 stale+active+degraded
>
>
> Effectively there is no data for those PGs; I formatted the OSDs myself.
> How can I tell Ceph that there is no way to get that data back and that
> it should forget about those PGs and move on?

Look in the docs (ceph.com/docs) for the "lost" commands. However,
once you've killed all the OSDs in a cluster there's basically no
point in keeping the "cluster" around; you should just wipe it and
start over again.
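
If you do want to keep it, a rough sketch looks something like the
following (exact flags can vary a bit by release, the OSD ids 0 and 1 are
just taken from your status output, and <pgid> is a placeholder for each
PG that stays stuck):

    # declare the dead OSDs' data permanently unrecoverable
    ceph osd lost 0 --yes-i-really-mean-it
    ceph osd lost 1 --yes-i-really-mean-it

    # then, for any PG that remains stuck stale, recreate it empty
    ceph pg force_create_pg <pgid>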

> Also, how can I delete the 'data' and 'metadata' pools, or are they needed
> for some internal stuff (I won't use mds)?

Hmm, I think we inadvertently made this impossible. I've filed a bug:
http://tracker.ceph.com/issues/8010
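
For pools that aren't tied to CephFS (like the rbd pool you already
removed), deletion normally looks something like this (the pool name is
repeated as a safety check, and <pool-name> is just a placeholder):

    ceph osd pool delete <pool-name> <pool-name> --yes-i-really-really-mean-it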
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com