Re: [ceph-users] ceph status: pg backfill_toofull, but all OSDs have enough space

2019-08-22 Thread Brad Hubbard
https://tracker.ceph.com/issues/41255 is probably reporting the same issue. On Thu, Aug 22, 2019 at 6:31 PM Lars Täuber wrote: > > Hi there! > > We also experience this behaviour in our cluster while it is moving PGs. > > # ceph health detail > HEALTH_ERR 1 MDSs report slow metadata IOs; Reduced

Re: [ceph-users] ceph status: pg backfill_toofull, but all OSDs have enough space

2019-08-22 Thread Lars Täuber
Hi there! We also experience this behaviour in our cluster while it is moving PGs. # ceph health detail HEALTH_ERR 1 MDSs report slow metadata IOs; Reduced data availability: 2 pgs inactive; Degraded data redundancy (low space): 1 pg backfill_toofull MDS_SLOW_METADATA_IO 1 MDSs report slow

Re: [ceph-users] ceph status: pg backfill_toofull, but all OSDs have enough space

2019-08-21 Thread Reed Dier
Just chiming in to say that I too had some issues with backfill_toofull PGs, despite no OSDs being in a backfillfull state, albeit there were some nearfull OSDs. I was able to get through it by reweighting down the OSD that was the target reported by ceph pg dump | grep 'backfill_toofull'.
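
A minimal sketch of that workaround; the OSD id and the reweight value below are placeholders, and the weight should be raised back once backfill completes:

    # List the PGs stuck in backfill_toofull and the OSDs they map to
    ceph pg dump pgs_brief | grep backfill_toofull

    # Check nearfull/backfillfull usage of the OSDs involved
    ceph osd df tree

    # Temporarily reweight the reported target OSD down so data moves elsewhere
    ceph osd reweight 12 0.95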

Re: [ceph-users] ceph status showing wrong osd

2018-06-05 Thread Muneendra Kumar M
incorrectly. Let me check this. Regards, Muneendra. *From:* Paul Emmerich [mailto:paul.emmer...@croit.io] *Sent:* Tuesday, June 05, 2018 5:51 PM *To:* Muneendra Kumar M *Cc:* ceph-users *Subject:* Re: [ceph-users] ceph status showing wrong osd It was either created incorrectly (no auth key

Re: [ceph-users] ceph status showing wrong osd

2018-06-05 Thread Paul Emmerich
It was either created incorrectly (no auth key?) or it can't contact the monitor for some reason. The log file should tell you more. Paul 2018-06-05 13:20 GMT+02:00 Muneendra Kumar M : > Hi, > > I have created a cluster and when I run ceph status it is showing me the > wrong number of osds. >
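
A hedged sketch of how to check both possibilities; osd.7 and the log path are placeholders, and the systemctl unit applies to systemd-based installs:

    # Does the cluster have an auth key for the missing OSD?
    ceph auth get osd.7

    # Is the daemon actually running on its host?
    systemctl status ceph-osd@7

    # The OSD's own log usually explains why it cannot reach the monitors
    tail -n 100 /var/log/ceph/ceph-osd.7.log

    # Compare registered OSDs with what is actually up/in
    ceph osd tree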

Re: [ceph-users] ceph status doesn't show available and used disk space after upgrade

2017-12-26 Thread kevin parrikar
It was a firewall issue on the controller nodes. After allowing the ceph-mgr port in iptables, everything is displaying correctly. Thanks to the people on IRC. Thanks a lot, Kevin On Thu, Dec 21, 2017 at 5:24 PM, kevin parrikar wrote: > accidentally removed mailing list email > >
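
For reference, a rough sketch of the kind of iptables change involved; 6800-7300 is the default port range Ceph daemons (including ceph-mgr) bind to, so check the actual port on the node first:

    # See which port the active ceph-mgr is listening on
    ss -tlnp | grep ceph-mgr

    # Open the default Ceph daemon port range on the controller node
    iptables -A INPUT -p tcp --dport 6800:7300 -j ACCEPT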

Re: [ceph-users] ceph status doesn't show available and used disk space after upgrade

2017-12-21 Thread kevin parrikar
Accidentally removed the mailing list email ++ceph-users Thanks a lot JC for looking into this issue. I am really out of ideas. ceph.conf on the mgr node, which is also a monitor node: [global] fsid = 06c5c906-fc43-499f-8a6f-6c8e21807acf mon_initial_members = node-16 node-30 node-31 mon_host = 172.16.1.9

Re: [ceph-users] ceph status doesn't show available and used disk space after upgrade

2017-12-20 Thread Jean-Charles Lopez
Hi Kevin looks like the pb comes from the mgr user itself then. Can you get me the output of - ceph auth list - cat /etc/ceph/ceph.conf on your mgr node Regards JC While moving. Excuse unintended typos. > On Dec 20, 2017, at 18:40, kevin parrikar wrote: > >

Re: [ceph-users] ceph status doesn't show available and used disk space after upgrade

2017-12-20 Thread kevin parrikar
Thanks JC, I tried ceph auth caps client.admin osd 'allow *' mds 'allow *' mon 'allow *' mgr 'allow *' but the status is still the same; also, mgr.log is being flooded with the errors below. 2017-12-21 02:39:10.622834 7fb40a22b700 0 Cannot get stat of OSD 140 2017-12-21 02:39:10.622835 7fb40a22b700 0

Re: [ceph-users] ceph status doesn't show available and used disk space after upgrade

2017-12-20 Thread Jean-Charles Lopez
Hi, make sure the client.admin user has an MGR cap using ceph auth list. At some point there was a glitch in the update process that was not adding the MGR cap to the client.admin user. JC > On Dec 20, 2017, at 10:02, kevin parrikar wrote: > > hi All, > I have
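
A sketch of that check, together with the corresponding fix (the same ceph auth caps command Kevin tried in his follow-up):

    # Inspect the caps currently assigned to client.admin
    ceph auth get client.admin

    # If no "caps mgr" line is present, re-grant all caps including mgr
    # (ceph auth caps replaces the full cap set, so list every daemon type)
    ceph auth caps client.admin mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *'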

Re: [ceph-users] ceph status doesn't show available and used disk space after upgrade

2017-12-20 Thread Ronny Aasen
On 20.12.2017 19:02, kevin parrikar wrote: hi All, I have upgraded the cluster from Hammer to Jewel and then to Luminous. I am able to upload/download glance images but ceph -s shows 0 KB used and available, and probably because of that cinder create is failing. ceph -s   cluster:     id:
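
Since Luminous, the usage numbers in ceph -s and ceph df come from ceph-mgr, so a quick hedged check is whether an active mgr exists at all; node-16 below is just the mon/mgr host name taken from Kevin's config:

    # The status output should show an "mgr:" line with an active daemon
    ceph -s

    # Show the active manager and whether one is available
    ceph mgr dump | grep -E '"active_name"|"available"'

    # On the mgr host (systemd installs), confirm the daemon is running
    systemctl status ceph-mgr@node-16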

Re: [ceph-users] Ceph Status - Segmentation Fault

2016-06-13 Thread Brad Hubbard
working on something else and saw your bug this morning. Cheers, Brad -Original Message- > From: Brad Hubbard <bhubb...@redhat.com> > To: Mathias Buresch <mathias.bure...@de.clara.net> > Cc: jsp...@redhat.com <jsp...@redhat.com>, ceph-us...@ceph.com e...@c

Re: [ceph-users] Ceph Status - Segmentation Fault

2016-06-13 Thread Mathias Buresch
t_enc_bl (cct=, > error="", out=..., key=..., t=) > at auth/cephx/CephxProtocol.h:464 > #2  encode_encrypt (cct=, > error="", > out=..., key=..., t=) > at auth/cephx/CephxProtocol.h:489 > #3  cephx_calc_client_server_challenge (cct=, > secret=...,

Re: [ceph-users] Ceph Status - Segmentation Fault

2016-06-01 Thread Brad Hubbard
, in=..., this=0x7fffea882470) > at auth/cephx/../Crypto.h:110 > #1 encode_encrypt_enc_bl (cct=, > error="", out=..., key=..., t=) > at auth/cephx/CephxProtocol.h:464 > #2 encode_encrypt (cct=, error="", > out=..., key=..., t=) > at auth/cephx/Cep

Re: [ceph-users] Ceph Status - Segmentation Fault

2016-05-31 Thread Mathias Buresch
msg/simple/DispatchQueue.h:103 #10 0x7ffff7bc4182 in start_thread () from /lib/x86_64-linux- gnu/libpthread.so.0 #11 0x778f147d in clone () from /lib/x86_64-linux-gnu/libc.so.6 Best regards Mathias-Original Message- From: Brad Hubbard <bhubb...@redhat.com> To: jsp...@redhat.com

Re: [ceph-users] Ceph Status - Segmentation Fault

2016-05-25 Thread Brad Hubbard
@redhat.com > Cc: ceph-us...@ceph.com > Sent: Thursday, 26 May, 2016 12:57:47 AM > Subject: Re: [ceph-users] Ceph Status - Segmentation Fault > > There wasn't a package ceph-debuginfo available (maybe because I am running > Ubuntu). I have installed these: > > * ceph-dbg > * li

Re: [ceph-users] Ceph Status - Segmentation Fault

2016-05-25 Thread Mathias Buresch
x7fffec0555d8) at msg/simple/DispatchQueue.cc:185 #9  0x731023bd in DispatchQueue::DispatchThread::entry (this=) at msg/simple/DispatchQueue.h:103 #10 0x77bc4182 in start_thread () from /lib/x86_64-linux- gnu/libpthread.so.0 #11 0x778f147d in clone () from /lib/x86_64-linux-g

Re: [ceph-users] Ceph Status - Segmentation Fault

2016-05-25 Thread John Spray
witching to Thread 0x7fffea883700 (LWP 23411)] > 0x73141a57 in ?? () from /usr/lib/librados.so.2 > (gdb) bt > #0 0x73141a57 in ?? () from /usr/lib/librados.so.2 > #1 0x7313aff4 in ?? () from /usr/lib/librados.so.2 > #2 0x72fe4a79 in ?? () from /usr/lib/

Re: [ceph-users] Ceph Status - Segmentation Fault

2016-05-25 Thread Mathias Buresch
in clone () from /lib/x86_64-linux-gnu/libc.so.6   Does that help? I can't really see where the error is. :) -Original Message- From: John Spray <jsp...@redhat.com> To: Mathias Buresch <mathias.bure...@de.clara.net> Cc: ceph-us...@ceph.com <ceph-us...@ceph.com> Subj

Re: [ceph-users] Ceph Status - Segmentation Fault

2016-05-25 Thread John Spray
On Mon, May 23, 2016 at 12:41 PM, Mathias Buresch wrote: > Please found the logs with higher debug level attached to this email. You've attached the log from your mon, but it's not your mon that's segfaulting, right? You can use normal ceph command line flags to
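
The preview is cut off mid-sentence; one common way to raise client-side debug output from the ceph CLI (not necessarily the exact flags John had in mind) looks like this:

    # Re-run the crashing command with client debug raised; the client
    # logs to stderr, so capture it in a file for inspection
    ceph -s --debug-monc 20 --debug-ms 1 2> /tmp/ceph-client-debug.log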

Re: [ceph-users] Ceph Status - Segmentation Fault

2016-05-24 Thread Brad Hubbard
/usr/bin/ceph is a Python script, so it's not segfaulting itself; some binary it launches is, and there doesn't appear to be much information about it in the log you uploaded. Are you able to capture a core file and generate a stack trace from gdb? The following may help to get some data. $ ulimit
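
The preview truncates right after "$ ulimit"; a typical core-capture sequence looks roughly like the following, bearing in mind that the crash happens inside a library loaded by the Python interpreter and that the core file's name and location depend on kernel.core_pattern:

    # Allow core dumps in this shell, then reproduce the crash
    ulimit -c unlimited
    ceph -s

    # Open the resulting core against the interpreter that ran /usr/bin/ceph
    gdb /usr/bin/python ./core

    # Inside gdb, dump the stack of every thread
    (gdb) thread apply all bt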

Re: [ceph-users] Ceph status

2015-01-06 Thread Lincoln Bryant
Hi Ajitha, For one, it looks like you don't have enough OSDs for the number of replicas you have specified in the config file. What is the value of your 'osd pool default size' in ceph.conf? If it's 3, for example, then you need to have at least 3 hosts with 1 OSD each (with the default
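
A short sketch of the checks Lincoln is pointing at ('rbd' below is only an example pool name):

    # Default replication size the config file sets for new pools (if any)
    grep 'osd pool default size' /etc/ceph/ceph.conf

    # Replication size actually set on an existing pool
    ceph osd pool get rbd size

    # How many OSDs exist and how they are spread across hosts
    ceph osd tree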

Re: [ceph-users] ceph status 104 active+degraded+remapped 88 creating+incomplete

2014-10-31 Thread Robert Sander
On 29.10.2014 18:29, Thomas Alrin wrote: Hi all, I'm new to Ceph. What is wrong with this cluster? How can I make the status change to HEALTH_OK? Please help. With the current default pool size of 3 and the default crush rule you need at least 3 OSDs on separate nodes for a new ceph cluster to
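
If adding a third node is not an option for a test setup, a commonly used alternative (sketched below with a placeholder pool name) is to lower the replication requirement instead; for single-node labs, setting osd crush chooseleaf type = 0 in ceph.conf before creating the cluster is another frequent workaround.

    # Reduce replication on an existing pool (test/lab use only)
    ceph osd pool set mypool size 2
    ceph osd pool set mypool min_size 1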

Re: [ceph-users] ceph --status Missing keyring

2014-08-14 Thread John Wilkins
Dan, Do you have /etc/ceph/ceph.client.admin.keyring, or is that in a local directory? Ceph will be looking for it in the /etc/ceph directory by default. See if adding read permissions works, e.g., sudo chmod +r. You can also try sudo when executing ceph. On Wed, Aug 6, 2014 at 6:55 AM,
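
A sketch of the checks John describes:

    # Is the admin keyring in the default location, and is it readable?
    ls -l /etc/ceph/ceph.client.admin.keyring

    # Grant read permission to unprivileged users running the ceph CLI
    sudo chmod +r /etc/ceph/ceph.client.admin.keyring

    # Or simply run the client with elevated privileges
    sudo ceph --status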