Hi,
Currently running Mimic 13.2.5.
We had reports this morning of timeouts and failures with PUT and GET
requests to our Ceph RGW cluster. I found these messages in the RGW
log:
RGWReshardLock::lock failed to acquire lock on
bucket_name:bucket_instance ret=-16
NOTICE: resharding operation on
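As an aside on the RGWReshardLock message: RGW logs failures as negative errno values, so ret=-16 is EBUSY, i.e. the reshard lock on that bucket instance is already held (typically because another reshard attempt on the same bucket is still in progress). A quick way to decode such codes with the standard library:

```python
import errno
import os

# RGW return codes are negative errnos; ret=-16 in the log above.
ret = -16

print(errno.errorcode[-ret])  # EBUSY
print(os.strerror(-ret))      # human-readable description of EBUSY
```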
This is the output for osd.270, which still has slow requests blocked
even after a restart. How should this be interpreted?
root@ld5507:~# ceph daemon osd.270 dump_blocked_ops
{
    "ops": [
        {
            "description": "osd_pg_create(e293649 59.b:267033 59.2c:267033)",
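The admin socket returns JSON, so it can be summarized with a short script. A minimal sketch; the fragment below is illustrative, with the "description" field taken from the excerpt above and the other fields ("age", "flag_point", "num_blocked_ops") assumed to follow the usual dump_blocked_ops shape:

```python
import json

# Illustrative fragment of `ceph daemon osd.N dump_blocked_ops` output.
sample = """
{
    "ops": [
        {
            "description": "osd_pg_create(e293649 59.b:267033 59.2c:267033)",
            "age": 7537.9,
            "flag_point": "started"
        }
    ],
    "num_blocked_ops": 1
}
"""

blocked = json.loads(sample)
for op in blocked["ops"]:
    # Print what is stuck, for how long, and where it stalled.
    print(f'{op["description"]} (blocked {op["age"]:.0f}s at "{op["flag_point"]}")')
```

In this case the blocked op is an osd_pg_create for pool 59, which points at PG creation rather than client IO being stuck.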
I have a 12.2.12 cluster with 3 mons, with the mgr active on one of them.
I have noticed that the command "ceph pg dump" hangs on all mons except
the one where the mgr is running.
"ceph pg dump" also runs fine on the OSD nodes.
Is this expected behavior?
thx
Frank
Requests stuck for > 2 hours cannot be attributed to "IO load on the cluster".
Looks like some OSDs really are stuck, things to try:
* run "ceph daemon osd.X dump_blocked_ops" on one of the affected OSDs
to see what is stuck
* try restarting OSDs to see if it clears up automatically
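To apply the first suggestion across all affected OSDs, you can pull the OSD ids out of "ceph health detail" and generate the dump_blocked_ops commands to run. A rough sketch; the sample lines below are hypothetical (the exact wording varies by release), so the script just looks for "osd.<id>" tokens:

```python
import re

# Hypothetical `ceph health detail` output lines; exact wording varies.
health_lines = [
    "1438 slow requests are blocked > 32 sec",
    "osd.270 has blocked requests > 4096 sec",
    "osd.31 has blocked requests > 32 sec",
]

# Collect the distinct OSD ids mentioned in the health output.
affected = sorted({int(m) for line in health_lines
                   for m in re.findall(r"osd\.(\d+)", line)})

# Emit one dump_blocked_ops command per affected OSD (run on its host).
for osd in affected:
    print(f"ceph daemon osd.{osd} dump_blocked_ops")
```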
Paul
--
On Fri, Oct 25, 2019 at 11:14 PM Maged Mokhtar wrote:
> 3. vMotion between a Ceph datastore and an external datastore: this will be
> bad. This seems to be the case you are testing. It is bad because, between 2
> different storage systems (IQNs served on different targets), VAAI XCOPY
> cannot be
On Mon, Oct 28, 2019 at 8:07 PM Mike Christie wrote:
>
> On 10/25/2019 03:25 PM, Ryan wrote:
> > Can you point me to the directions for the kernel mode iscsi backend. I
> > was following these directions
> > https://docs.ceph.com/docs/master/rbd/iscsi-target-cli/
> >
>
> If you just wanted to use
Hi,
after enabling the Ceph balancer (with the command "ceph balancer on"),
the health status changed to ERROR.
This is the current output of ceph health detail:
root@ld3955:~# ceph health detail
HEALTH_ERR 1438 slow requests are blocked > 32 sec; 861 stuck requests
are blocked > 4096 sec; mon ld5505 is low
Hello,
I noticed several error messages in the active MGR's log:
2019-10-31 10:40:16.341 7ff9edd64700 0 auth: could not find secret_id=3714
2019-10-31 10:40:16.341 7ff9edd64700 0 cephx: verify_authorizer could
not get service secret for service mgr secret_id=3714
2019-10-31 10:40:16.541