Your PG acting set is empty, and the cluster reports that it doesn't have that
PG, which indicates the PG does not have a primary OSD.
What was your cluster status when you created the pool?
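A minimal sketch of the kind of checks implied here (1.0 below is a placeholder PG id, not one from this cluster):

# show the up set and acting set CRUSH computed for the PG
ceph pg map 1.0
# full PG state; with an empty acting set this typically reports that the PG has no primary
ceph pg 1.0 query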
Wido den Hollander wrote on Wed, Oct 30, 2019 at 1:30 PM:
>
>
>
> On 10/30/19 3:04 AM, soumya tr wrote:
> > Hi all,
> >
> > I have a 3 node ceph
On 10/30/19 3:04 AM, soumya tr wrote:
> Hi all,
>
> I have a 3 node ceph cluster setup using juju charms. ceph health shows
> having inactive pgs.
>
> ---
> # ceph status
>   cluster:
>     id:     0e36956e-ef64-11e9-b472-00163e6e01e8
>     health: HEALTH_WARN
>
See https://tracker.ceph.com/issues/42515. Just ignore the warning for now.
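If you do want to see what triggered it, a rough sketch (the log path is the usual cluster log location on the mons):

# show which pool the large omap object was found in
ceph health detail
# the deep scrub that detected it logs the offending object to the cluster log
grep 'Large omap object' /var/log/ceph/ceph.log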
On Mon, Oct 7, 2019 at 7:50 AM Nigel Williams wrote:
>
> Out of the blue this popped up (on an otherwise healthy cluster):
>
> HEALTH_WARN 1 large omap objects
> LARGE_OMAP_OBJECTS 1 large omap objects
> 1 large
On Tue, Oct 29, 2019 at 9:09 PM Jérémy Gardais wrote:
>
> Thus spake Brad Hubbard (bhubb...@redhat.com) on Tuesday, 29 October 2019 at
> 08:20:31:
> > Yes, try and get the pgs healthy, then you can just re-provision the down
> > OSDs.
> >
> > Run a scrub on each of these pgs and then use the
Hi all,
I have a 3 node ceph cluster setup using juju charms. ceph health shows
having inactive pgs.
---
# ceph status
  cluster:
    id:     0e36956e-ef64-11e9-b472-00163e6e01e8
    health: HEALTH_WARN
            Reduced data availability: 114 pgs inactive
  services:
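The usual first checks for inactive PGs, sketched with generic commands (nothing here is specific to the juju deployment):

# list the stuck PGs and the reason ceph gives for each
ceph health detail
ceph pg dump_stuck inactive
# confirm all OSDs are up/in and that CRUSH can actually place the replicas
ceph osd tree
ceph osd pool ls detail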
Ok, assuming my math is right, you've got ~14G of data in the mempools.
~6.5GB bluestore data
~1.8GB bluestore onode
~5GB bluestore other
The rest is other misc stuff. That seems to be pretty much in line with the
numbers you posted in your screenshot, i.e. this doesn't appear to be a
leak, but
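A sketch of how to put those numbers next to the configured budget (osd.0 is a placeholder id):

# rough memory budget the OSD tries to stay under when autotuning its caches
ceph daemon osd.0 config get osd_memory_target
# per-pool byte counts, the same data the totals above were derived from
ceph daemon osd.0 dump_mempools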
I did some digging around and yes, it is exactly as you said: leftover systemd
unit files were still trying to boot the previous OSDs. We removed them and now
it works properly. Thank you for the help.
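A sketch of the cleanup described above (the OSD id 7 is a placeholder):

# list the ceph-osd units systemd still knows about
systemctl list-units 'ceph-osd@*' --all
# drop a stale unit for an OSD id that no longer exists
systemctl disable ceph-osd@7
systemctl reset-failed ceph-osd@7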
Jean-Philippe Méthot
Openstack system administrator
Administrateur système Openstack
PlanetHoster inc.
> On Oct 29, 2019, at 11:23 AM, Jean-Philippe Méthot wrote:
> A few months back, we had one of our OSD node motherboards die. At the time,
> we simply waited for recovery and purged the OSDs that were on the dead node.
> We just replaced that node and added back the drives as new OSDs. At the
Hi,
A few months back, we had one of our OSD node motherboards die. At the time, we
simply waited for recovery and purged the OSDs that were on the dead node. We
just replaced that node and added back the drives as new OSDs. At the ceph
administration level, everything looks fine, no duplicate
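A sketch of the checks that back up "everything looks fine" at the ceph level:

# the old ids should be gone and the new OSDs placed under the replaced host
ceph osd tree
# leftover auth keys from the purged OSDs would still be listed here
ceph auth ls | grep '^osd\.'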
Just to give some short feedback - everything is fine now:
- via ceph-ansible we got some tcmu-runner / ceph-iscsi development versions
- our iSCSI ALUA setup was a mess (it was a mixture of explicit and implicit
ALUA, while only implicit ALUA is supported at the moment)
- our multipath devices
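A sketch of how the resulting setup can be checked from both sides, assuming the standard ceph-iscsi tooling:

# initiator side: with implicit ALUA each LUN should show one active/optimized path group
multipath -ll
# gateway side: list the gateways, LUNs and their assigned owners
gwcli ls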
Ok, looking at the mempool dump, what does it tell me? This affects multiple
OSDs; we get crashes almost every hour.
{ "mempool": {
"by_pool": {
"bloom_filter": {
"items": 0,
"bytes": 0
},
"bluestore_alloc": {
Thus spake Brad Hubbard (bhubb...@redhat.com) on Tuesday, 29 October 2019 at
08:20:31:
> Yes, try and get the pgs healthy, then you can just re-provision the down
> OSDs.
>
> Run a scrub on each of these pgs and then use the commands on the
> following page to find out more information for each
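The referenced page isn't quoted here, but the commands meant are presumably along these lines (2.5 is a placeholder PG id):

# scrub one of the affected PGs
ceph pg deep-scrub 2.5
# after the scrub finishes, list what it found inconsistent
rados list-inconsistent-obj 2.5 --format=json-pretty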
On 24/10/2019 16:23, Christopher Wieringa wrote:
> Hello all,
>
> I’ve been using the Ceph kernel modules in Ubuntu to load a CephFS
> filesystem quite successfully for several months. Yesterday, I went
> through a round of updates on my Ubuntu 18.04 machines, which loaded
>
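A sketch of the usual first diagnostics when the kernel CephFS client misbehaves after a kernel update:

# confirm which kernel the update round actually left running
uname -r
# kernel cephfs/libceph errors end up in the kernel log
dmesg | grep -i ceph
# cluster-side view of the feature bits / releases of connected clients
ceph features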
Hello Ceph Users,
I am trialing CephFS / Ganesha NFS for VMware usage. We are on Mimic / CentOS
7.7 / 130 x 12TB 7200rpm OSDs / 13 hosts / 3x replication.
So far the read performance has been great. The write performance (NFS sync)
hasn't been great. We use a lot of 64KB NFS reads / writes and the
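A rough way to separate ceph-side latency from the NFS/Ganesha layer, sketched with a placeholder pool name:

# single-threaded 64KB writes roughly mimic the sync NFS pattern described
rados bench -p cephfs_data 30 write -b 65536 -t 1
# per-OSD commit/apply latency while the load is running
ceph osd perf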