Re: [ceph-users] Ceph pg in inactive state

2019-10-29 Thread 潘东元
Your pg acting set is empty, and the cluster reporting "i don't have pg" indicates the pg does not have a primary OSD. What was your cluster status when you created the pool? Wido den Hollander wrote on Wed, Oct 30, 2019 at 1:30 PM: > > > > On 10/30/19 3:04 AM, soumya tr wrote: > > Hi all, > > > > I have a 3 node ceph
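A minimal sketch of how this can be checked, assuming a placeholder pg id of 1.0 and that the pool's OSDs are expected to be up:

  ceph osd tree        # confirm the OSDs backing the pool are up and in
  ceph pg 1.0 query    # an empty "up"/"acting" set, or an "i don't have pgid"
                       # error, means no primary OSD has been assigned to the pg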

Re: [ceph-users] Ceph pg in inactive state

2019-10-29 Thread Wido den Hollander
On 10/30/19 3:04 AM, soumya tr wrote: > Hi all, > > I have a 3 node ceph cluster setup using juju charms. ceph health shows > having inactive pgs. > > --- > # ceph status > cluster: > id: 0e36956e-ef64-11e9-b472-00163e6e01e8 > health: HEALTH_WARN >

Re: [ceph-users] cephfs 1 large omap objects

2019-10-29 Thread Yan, Zheng
See https://tracker.ceph.com/issues/42515. Just ignore the warning for now. On Mon, Oct 7, 2019 at 7:50 AM Nigel Williams wrote: > > Out of the blue this popped up (on an otherwise healthy cluster): > > HEALTH_WARN 1 large omap objects > LARGE_OMAP_OBJECTS 1 large omap objects > 1 large
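If the offending object needs to be identified anyway, a hedged sketch of where to look (the log path is an assumption based on a default install):

  ceph health detail
  # the OSD that flagged it usually logs a "Large omap object found" line,
  # e.g. in the cluster log on a monitor host:
  grep -i "large omap object" /var/log/ceph/ceph.log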

Re: [ceph-users] Inconsistents + FAILED assert(recovery_info.oi.legacy_snaps.size())

2019-10-29 Thread Brad Hubbard
On Tue, Oct 29, 2019 at 9:09 PM Jérémy Gardais wrote: > > Thus spake Brad Hubbard (bhubb...@redhat.com) on Tuesday, 29 October 2019 at 08:20:31: > > Yes, try and get the pgs healthy, then you can just re-provision the down > > OSDs. > > > > Run a scrub on each of these pgs and then use the
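A minimal sketch of that workflow, assuming a placeholder pg id of 1.0:

  ceph pg deep-scrub 1.0
  # once the scrub finishes, list what is inconsistent in that pg:
  rados list-inconsistent-obj 1.0 --format=json-pretty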

[ceph-users] Ceph pg in inactive state

2019-10-29 Thread soumya tr
Hi all, I have a 3 node ceph cluster setup using juju charms. ceph health shows having inactive pgs. --- *# ceph status cluster: id: 0e36956e-ef64-11e9-b472-00163e6e01e8 health: HEALTH_WARN Reduced data availability: 114 pgs inactive services:
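A short sketch of how the inactive pgs can be enumerated, assuming a default admin keyring:

  ceph health detail
  ceph pg dump_stuck inactive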

Re: [ceph-users] very high ram usage by OSDs on Nautilus

2019-10-29 Thread Mark Nelson
OK, assuming my math is right you've got ~14G of data in the mempools: ~6.5GB bluestore data, ~1.8GB bluestore onode, ~5GB bluestore other; the rest is other misc stuff. That seems to be pretty much in line with the numbers you posted in your screenshot. I.e. this doesn't appear to be a leak, but
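Since the thread is about bounding OSD RAM on Nautilus, a hedged sketch of checking and lowering the cache autotuning target (osd.0 and the 3 GiB value are placeholders):

  ceph config get osd.0 osd_memory_target
  # value is in bytes; 3221225472 = 3 GiB
  ceph config set osd osd_memory_target 3221225472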

Re: [ceph-users] Ceph OSD node trying to possibly start OSDs that were purged

2019-10-29 Thread Jean-Philippe Méthot
I did some digging around and yes, it is exactly as you said: systemd files remained to boot up the previous OSDs. We removed them and now it works properly. Thank you for the help. Jean-Philippe Méthot Openstack system administrator Administrateur système Openstack PlanetHoster inc. > Le
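For anyone hitting the same thing, a sketch of clearing such leftover units, assuming OSD id 10 is one of the purged ids (placeholder):

  systemctl list-units 'ceph-osd@*'
  systemctl disable --now ceph-osd@10
  systemctl reset-failed ceph-osd@10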

Re: [ceph-users] Ceph OSD node trying to possibly start OSDs that were purged

2019-10-29 Thread Bryan Stillwell
On Oct 29, 2019, at 11:23 AM, Jean-Philippe Méthot wrote: > A few months back, we had one of our OSD node motherboards die. At the time, > we simply waited for recovery and purged the OSDs that were on the dead node. > We just replaced that node and added back the drives as new OSDs. At the

[ceph-users] Ceph OSD node trying to possibly start OSDs that were purged

2019-10-29 Thread Jean-Philippe Méthot
Hi, A few months back, we had one of our OSD node motherboards die. At the time, we simply waited for recovery and purged the OSDs that were on the dead node. We just replaced that node and added back the drives as new OSDs. At the ceph administration level, everything looks fine, no duplicate
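A hedged sketch of the kind of cross-check that can confirm the replacement OSDs are clean; nothing here is specific to this cluster:

  ceph osd tree            # look for stale or duplicate OSD ids
  ceph-volume lvm list     # on the node, list which LVs map to which OSD ids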

Re: [ceph-users] TCMU Runner: Could not check lock ownership. Error: Cannot send after transport endpoint shutdown

2019-10-29 Thread Kilian Ries
Just to give some short feedback - everything is fine now: - via ceph-ansible we got some tcmu-runner / ceph-iscsi development versions - our iSCSI ALUA setup was a mess (it was a mixture of explicit and implicit ALUA while only implicit ALUA is supported at the moment) - our multipath devices
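A minimal sketch of verifying the path state from an initiator, assuming the standard multipath tools are installed:

  multipath -ll
  # the "prio" / path-group columns show whether ALUA priorities are being
  # picked up consistently across all paths to the gateways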

[ceph-users] very high ram usage by OSDs on Nautilus

2019-10-29 Thread Philippe D'Anjou
Ok looking at mempool, what does it tell me? This affects multiple OSDs, got crashes almost every hour. { "mempool": { "by_pool": { "bloom_filter": { "items": 0, "bytes": 0 }, "bluestore_alloc": {
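For reference, a dump like this can usually be regenerated from the OSD admin socket (osd.0 is a placeholder id):

  ceph daemon osd.0 dump_mempools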

Re: [ceph-users] Inconsistents + FAILED assert(recovery_info.oi.legacy_snaps.size())

2019-10-29 Thread Jérémy Gardais
Thus spake Brad Hubbard (bhubb...@redhat.com) on Tuesday, 29 October 2019 at 08:20:31: > Yes, try and get the pgs healthy, then you can just re-provision the down > OSDs. > > Run a scrub on each of these pgs and then use the commands on the > following page to find out more information for each

Re: [ceph-users] CephFS kernel module lockups in Ubuntu linux-image-5.0.0-32-generic?

2019-10-29 Thread Simon Oosthoek
On 24/10/2019 16:23, Christopher Wieringa wrote: > Hello all, > > I’ve been using the Ceph kernel modules in Ubuntu to load a CephFS > filesystem quite successfully for several months. Yesterday, I went > through a round of updates on my Ubuntu 18.04 machines, which loaded >
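A quick sketch of narrowing down whether the new kernel is involved, assuming standard Ubuntu tooling:

  uname -r                              # kernel currently running
  dpkg -l 'linux-image-*' | grep ^ii    # kernels still installed to fall back to
  dmesg | grep -iE 'ceph|hung task'     # look for lockup traces from the ceph module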

[ceph-users] CephFS Ganesha NFS for VMWare

2019-10-29 Thread Glen Baars
Hello Ceph Users, I am trialing CephFS / Ganesha NFS for VMWare usage. We are on Mimic / Centos 7.7 / 130 x 12TB 7200rpm OSDs / 13 hosts / 3 replica. So far the read performance has been great. The write performance (NFS sync) hasn't been great. We use a lot of 64KB NFS read / writes and the
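One hedged way to characterize small synchronous writes against the NFS datastore (mount path and sizes are placeholders):

  fio --name=sync64k --directory=/mnt/vmware_datastore --rw=write --bs=64k \
      --size=1g --ioengine=libaio --iodepth=1 --direct=1 --sync=1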