Re: [ceph-users] stuck unclean since forever

2016-06-23 Thread
…Ceph places data across 3 different _hosts_ by default.

Regards,
Burkhard

Re: [ceph-users] Inconsistent PGs

2016-06-22 Thread
…ting min_size to 1 yet (we treat it as a last resort).

Some cluster info:
# ceph --version
ceph version 0.94.6 (e832001feaf8c176593e032…

Re: [ceph-users] cluster ceph -s error

2016-06-20 Thread
…(25.000%)
crush map has legacy tunables (require argonaut, min is firefly)
crush map has straw_calc_version=0
monmap e1: 1 mons at {nodeB=155.232.195.4:6789/0}
election epoch 7, quorum 0 nodeB
osdmap e80: 10 o…
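The legacy-tunables warnings quoted above are usually cleared by raising the CRUSH tunables profile. A minimal sketch, assuming a live cluster with an admin keyring (note that changing tunables triggers data movement, so it is done deliberately):

```shell
# Raise CRUSH tunables to a named profile; "firefly" matches the
# minimum profile the warning above asks for. "optimal" would pick
# the best profile for the running release instead.
ceph osd crush tunables firefly

# straw_calc_version=0 can be fixed independently of the profile:
ceph osd crush set-tunable straw_calc_version 1

# Verify the warnings are gone:
ceph status
```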

Re: [ceph-users] [ceph-mds] mds service can not start after shutdown in 10.1.0

2016-04-11 Thread
…"0"? Usually people set the ID to the hostname. Check it in /var/lib/ceph/mds.

John

On Mon, Apr 11, 2016 at 9:44 AM, 施柏安 <desmon...@inwinstack.com> wrote:
> Hi cephers,
> I was testing CephFS's HA, so I shut down the active MDS server.
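As John notes above, the MDS ID corresponds to a directory under /var/lib/ceph/mds. A sketch of checking the ID and restarting the daemon on a systemd-based install of that era (the hostname `nodeA` is a hypothetical example):

```shell
# The directory name encodes the daemon id: "ceph-nodeA" -> id "nodeA".
ls /var/lib/ceph/mds/

# Start the MDS under its id and check that it comes up:
sudo systemctl start ceph-mds@nodeA
sudo systemctl status ceph-mds@nodeA

# Watch the MDS map until the daemon reaches active (or standby):
ceph mds stat
```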

Re: [ceph-users] 1 pg stuck

2016-03-24 Thread
Is the Ceph cluster stuck in a recovery state? Did you try the commands "ceph pg repair " or "ceph pg query" to trace its state?

2016-03-24 22:36 GMT+08:00 yang sheng:
> Hi all,
> I am testing Ceph right now using 4 servers with 8 OSDs (all OSDs are up and in). I have 3…
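The commands suggested above are run against a live cluster; a minimal sketch (the PG id `1.2f` is a made-up placeholder, substitute a real one):

```shell
# List PGs that are not active+clean (needs a running Ceph cluster
# and an admin keyring).
ceph pg dump_stuck unclean

# Dump one PG's full state, peering history and acting OSD set;
# replace "1.2f" with a PG id from the output above.
ceph pg 1.2f query

# Ask the primary OSD to repair inconsistencies found by scrub.
ceph pg repair 1.2f
```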

Re: [ceph-users] Need help for PG problem

2016-03-23 Thread
It seems that you only have two hosts in your CRUSH map, but the default ruleset separates replicas by host. If you set size 3 for your pools, then one replica cannot be placed, because you only have two hosts.

2016-03-23 20:17 GMT+08:00 Zhang Qiang:
> And…
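With only two hosts, a replicated pool of size 3 cannot satisfy the default host-level failure domain, so some PGs stay degraded forever. Two common workarounds, sketched here with a hypothetical pool name `rbd` (run against a live cluster):

```shell
# Option 1: lower the replica count to match the number of hosts.
ceph osd pool set rbd size 2
ceph osd pool set rbd min_size 1

# Option 2: keep size 3 but relax the failure domain to OSD, so
# replicas may land on different OSDs of the same host:
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# edit crushmap.txt:
#   "step chooseleaf firstn 0 type host" -> "... type osd"
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new
```

Option 2 trades durability for availability: losing one host can then take out all three replicas of an object.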

Re: [ceph-users] Fresh install - all OSDs remain down and out

2016-03-21 Thread
> root@bd-a:/etc/ceph#
>
> How should I change it?
> I never had to edit anything in this area in former versions of Ceph. Has something changed?
> Is any new parameter necessary in ceph.conf while installing?
>
> Thank you,
> Markus
>
> On 21.03.2016 at 10:34…

Re: [ceph-users] [cephfs] About feature 'snapshot'

2016-03-19 Thread
…jsp...@redhat.com>:
> On Fri, Mar 18, 2016 at 1:33 AM, 施柏安 <desmon...@inwinstack.com> wrote:
>> Hi John,
>> How do I turn this feature on?
>
> ceph mds set allow_new_snaps true --yes-i-really-mean-it
>
> John
>
>> Thank you
>> 2016-03-17 21:41 GMT…
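On releases of this era (0.94.x/10.x), the snapshot feature was off by default and enabled cluster-wide with the command John quotes above. A minimal sketch of enabling it and taking a snapshot (mount point and directory names are hypothetical):

```shell
# Enable the (then-experimental) CephFS snapshot feature.
ceph mds set allow_new_snaps true --yes-i-really-mean-it

# A snapshot is taken by creating a directory under the hidden
# ".snap" directory of any CephFS folder:
cd /mnt/cephfs/mydata           # hypothetical CephFS mount
mkdir .snap/before-upgrade      # snapshot of mydata at this instant
ls .snap/                       # list existing snapshots
rmdir .snap/before-upgrade      # delete the snapshot
```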

Re: [ceph-users] [cephfs] About feature 'snapshot'

2016-03-19 Thread
…er/cephfs/early-adopters/#most-stable-configuration

Which makes me wonder if we ought to be hiding the .snaps directory entirely in that case. I haven't previously thought about that, but it *is* a bit weird.
-Greg

> John
>
> On Thu, Mar…

[ceph-users] [cephfs] About feature 'snapshot'

2016-03-19 Thread
Hi all,
I've run into trouble with CephFS snapshots. The folder '.snap' seems to exist, but 'll -a' doesn't show it. And when I enter that folder and create a folder inside it, an error suggests something is wrong with using snapshots. Please check: http://imgur.com/elZhQvD
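The behaviour described above is expected: `.snap` is a virtual directory that CephFS deliberately omits from directory listings, yet it can still be entered by name. A short illustration on a CephFS mount (the paths are hypothetical):

```shell
cd /mnt/cephfs/somedir      # hypothetical CephFS mount
ls -a                       # ".snap" does NOT appear in the listing
cd .snap && pwd             # ...but the directory can be entered

# Creating a folder here attempts a snapshot; it fails with an
# error unless the allow_new_snaps feature has been enabled
# cluster-wide (as discussed elsewhere in this thread).
mkdir test-snap
```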