[ceph-users] Re: ceph pool with a whitespace as name

2021-03-09 Thread Boris Behrens
Ok, I changed the value to "metadata_heap": "", but it is still used. Any ideas how to stop this? On Wed, 10 Mar 2021 at 08:14, Boris Behrens wrote: > Found it. > [root@s3db1 ~]# radosgw-admin zone get --rgw-zone=eu-central-1 > { > "id": "ff7a8b0c-07e6-463a-861b-78f0adeba8ad", >
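A commonly suggested way to clear this setting is to export the zone, blank the field, import it again and commit the period (a sketch, not a verified fix; the zone name matches the output quoted above):
  $ radosgw-admin zone get --rgw-zone=eu-central-1 > zone.json
  # edit zone.json so that "metadata_heap" is set to ""
  $ radosgw-admin zone set --rgw-zone=eu-central-1 --infile zone.json
  $ radosgw-admin period update --commit
  # restart the radosgw daemons so they pick up the new period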

[ceph-users] Many OSDs marked down after no beacon for XXX seconds, just because one MON's OS disk was blocked.

2021-03-09 Thread 912273...@qq.com
Hello everyone, in the scenario where the OS disk is blocked, the mon service is still running but can't work normally, and soon the mon will fall out of quorum; yet some OSDs were still marked down after mon_osd_report_timeout*2 seconds, which makes the cluster unavailable. At this time, the OS is very
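For reference, the timeout mentioned above can be inspected and adjusted at runtime (a sketch only; raising it hides the mon problem rather than fixing it):
  $ ceph config get mon mon_osd_report_timeout
  # default is 900 seconds; raising it gives OSDs more time before being marked down
  $ ceph config set mon mon_osd_report_timeout 1800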

[ceph-users] Re: ceph pool with a whitespace as name

2021-03-09 Thread Boris Behrens
Found it. [root@s3db1 ~]# radosgw-admin zone get --rgw-zone=eu-central-1 { "id": "ff7a8b0c-07e6-463a-861b-78f0adeba8ad", "name": "eu-central-1", ...SNIP..., "metadata_heap": " ", "realm_id": "5d6f2ea4-b84a-459b-bce2-bccac338b3ef" } On Wed, 10 Mar 2021 at 07:37,

[ceph-users] Re: Openstack rbd image Error deleting problem

2021-03-09 Thread Konstantin Shalygin
> On 10 Mar 2021, at 09:50, Norman.Kern wrote: > > I have used Ceph RBD for OpenStack for some time, I met a problem while > destroying a VM. OpenStack tried to > > delete the RBD image but failed. I tested deleting an image with the rbd command, > it takes a lot of time (image size 512G or

[ceph-users] Openstack rbd image Error deleting problem

2021-03-09 Thread Norman.Kern
Hi guys, I have used Ceph RBD for OpenStack for some time, and I met a problem while destroying a VM. OpenStack tried to delete the RBD image but failed. I tested deleting an image with the rbd command; it takes a lot of time (image size 512G or more). Has anyone met the same problem? Thanks,
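Slow rbd removal on large images is often due to the client having to probe every backing object when the object-map feature is off. A hedged sketch of two common workarounds on recent releases (pool and image names are placeholders):
  $ rbd info <pool>/<image> | grep features    # check whether object-map is enabled
  $ rbd trash mv <pool>/<image>                # returns quickly; space is reclaimed later
  $ rbd trash purge <pool>                     # the actual deletion, can run in the background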

[ceph-users] ceph pool with a whitespace as name

2021-03-09 Thread Boris Behrens
Good morning ceph people, I have a pool whose name is a single whitespace, and I want to know what creates it. I already renamed it, but something recreates the pool. Is there a way to find out what created the pool and what its content is? When I checked its content I got [root@s3db1 ~]#
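For inspecting a pool whose name is just a space, quoting the name works with the rados tool; a read-only sketch:
  $ ceph df detail           # shows the pool and how much data it holds
  $ rados -p " " ls | head   # list a few object names; their prefixes often hint at what created them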

[ceph-users] Re: February 2021 Tech Talk and Code Walk-through

2021-03-09 Thread Mike Perez
Hi everyone, In case you missed the Ceph Tech Talk and Code Walkthrough for February, the recordings are now available to watch: Jason Dillaman | Librbd Part 2: https://www.youtube.com/watch?v=nVjYVmqNClM Sage Weil | What's New In the Pacific Release: https://youtu.be/PVtn53MbxTc On Tue, Feb

[ceph-users] Re: Ceph User Survey Now Available

2021-03-09 Thread Mike Perez
Hi everyone, We are approaching our deadline of April 2nd for the Ceph User Survey to be filled out. Thank you to everyone for the feedback so far. Please send further feedback for this survey here: https://pad.ceph.com/p/user-survey-2021-feedback On Tue, Feb 16, 2021 at 2:20 PM Mike Perez

[ceph-users] PG stuck at active+clean+remapped

2021-03-09 Thread Michael Fladischer
Hi, we replaced some of our OSDs a while ago, and while everything recovered as planned, one PG is still stuck at active+clean+remapped with no backfilling taking place. Mapping the PG in question shows me that one OSD is missing: $ ceph pg map 35.1fe osdmap e1265760 pg 35.1fe (35.1fe) ->
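active+clean+remapped with an incomplete up set usually means CRUSH cannot find a replacement OSD for that PG. A sketch of commands often used to dig into it (the pg id is taken from the output above, the OSD ids are placeholders):
  $ ceph pg 35.1fe query | less    # look at the up/acting sets and recovery state
  $ ceph osd df tree               # check whether some hosts or OSDs are full or missing
  # as a last resort, an explicit mapping can be pinned with pg-upmap:
  $ ceph osd pg-upmap-items 35.1fe <from-osd> <to-osd>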

[ceph-users] node down pg with backfill_wait waiting for incomplete?

2021-03-09 Thread Marc
I have a node down and PGs are remapping/backfilling. I also have a lot of PGs in backfill_wait. I was wondering if there is a specific order in which this is being executed. E.g. I have a large 'garbage' ec21 pool that is stuck. I could resolve that by changing the min size. However I rather
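On the ordering question: backfill is largely driven by per-PG priorities, and since Luminous individual PGs can be bumped; a sketch (the pgid is a placeholder):
  $ ceph pg force-backfill <pgid>              # prioritize backfill of one PG
  $ ceph pg force-recovery <pgid>              # same idea for recovery
  $ ceph config set osd osd_max_backfills 2    # raise or throttle concurrent backfills per OSD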

[ceph-users] Re: Rados gateway basic pools missing

2021-03-09 Thread St-Germain, Sylvain (SSC/SPC)
Ok, in the interface, when I create a bucket the index is created automatically: 1 device_health_metrics 2 cephfs_data 3 cephfs_metadata 4 .rgw.root 5 default.rgw.log 6 default.rgw.control 7 default.rgw.meta 8 default.rgw.buckets.index * I think I just could not make an insertion using s3cmd List
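That matches the usual behaviour: the RGW index and data pools only appear once a bucket is created and an object is written. A hedged s3cmd sketch (bucket name and file are placeholders):
  $ s3cmd mb s3://test-bucket
  $ s3cmd put ./hello.txt s3://test-bucket/
  $ ceph osd lspools    # default.rgw.buckets.data should appear after the first upload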

[ceph-users] Rados gateway basic pools missing

2021-03-09 Thread St-Germain, Sylvain (SSC/SPC)
Hi everyone, I just rebuilt a (test) cluster using: OS: Ubuntu 20.04.2 LTS, CEPH: ceph version 15.2.9 (357616cbf726abb779ca75a551e8d02568e15b17) octopus (stable), 3 nodes: monitor/storage. 1. The cluster looks good: # ceph -s cluster: id: 9a89aa5a-1702-4f87-a99c-f94c9f2cdabd

[ceph-users] Replacing disk with xfs on it, documentation?

2021-03-09 Thread Drew Weaver
Hello, I haven't needed to replace a disk in a while and it seems that I have misplaced my quick little guide on how to do it. Searching the docs, it now recommends using ceph-volume to create OSDs; doing that creates an LV: Disk /dev/sde: 4000.2 GB, 4000225165312
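A minimal replacement sketch with ceph-volume, assuming the OSD id is already known and the new disk is the /dev/sde shown above (destructive commands, double-check ids first):
  $ ceph osd out <id>
  $ ceph osd destroy <id> --yes-i-really-mean-it    # once data has migrated, or when reusing the id in place
  $ ceph-volume lvm zap /dev/sde --destroy          # wipes the old LVM metadata on the disk
  $ ceph-volume lvm create --osd-id <id> --data /dev/sde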

[ceph-users] Re: OSD crashes create_aligned_in_mempool in 15.2.9 and 14.2.16

2021-03-09 Thread Andrej Filipcic
Just confirming, the crashes are gone with gperftools-libs-2.7-8.el8.x86_64.rpm. Cheers, Andrej On 09/03/2021 16:52, Andrej Filipcic wrote: Hi, I was checking that bug yesterday, yes, and it smells the same. I will give the epel one a try, Thanks Andrej On 09/03/2021 16:44, Dan van der

[ceph-users] DocuBetter Meeting This Week -- 10 Mar 2021 1730 UTC

2021-03-09 Thread John Zachary Dover
This week's meeting will focus on the ongoing rewrite of the cephadm documentation and making certain that the documentation addresses the rough edges in the Pacific release. Meeting: https://bluejeans.com/908675367 Etherpad: https://pad.ceph.com/p/Ceph_Documentation

[ceph-users] Re: Bluestore OSD crash with tcmalloc::allocate_full_cpp_throw_oom in multisite setup with PG_DAMAGED cluster error

2021-03-09 Thread David Orman
For those who aren't on the bug tracker, this was brought up (and has follow-up) here: https://tracker.ceph.com/issues/49618 On Thu, Mar 4, 2021 at 9:55 PM Szabo, Istvan (Agoda) wrote: > > Hi, > > I have a 3 DC multisite setup. > > The replication is directional like HKG->SGP->US so the bucket

[ceph-users] Re: OSD crashes create_aligned_in_mempool in 15.2.9 and 14.2.16

2021-03-09 Thread Andrej Filipcic
Hi, I was checking that bug yesterday, yes, and it smells the same. I will give the epel one a try, Thanks Andrej On 09/03/2021 16:44, Dan van der Ster wrote: Hi Andrej, I wonder if this is another manifestation of the gperftools-libs v2.8 bug, e.g.

[ceph-users] Re: OSD crashes create_aligned_in_mempool in 15.2.9 and 14.2.16

2021-03-09 Thread Dan van der Ster
Hi Andrej, I wonder if this is another manifestation of the gperftools-libs v2.8 bug, e.g. https://tracker.ceph.com/issues/49618 If so, there is a fixed (downgraded) version in epel-testing now. Cheers, Dan On Tue, Mar 9, 2021 at 4:36 PM Andrej Filipcic wrote: > > > Hi, > > under
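For reference, checking and pulling the fixed package on CentOS/RHEL 8 would look roughly like this (a sketch; the exact package versions depend on the repo state):
  $ rpm -q gperftools-libs                                   # 2.8 is the problematic build
  $ dnf --enablerepo=epel-testing downgrade gperftools-libs  # pick up the fixed 2.7 build
  $ systemctl restart ceph-osd.target                        # OSDs must be restarted to load the new library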

[ceph-users] OSD crashes create_aligned_in_mempool in 15.2.9 and 14.2.16

2021-03-09 Thread Andrej Filipcic
Hi, under heavy load our cluster is experiencing frequent OSD crashes. Is this a known bug or should I report it? Any workarounds? It looks to be highly correlated with memory tuning. It happens with both Nautilus 14.2.16 and Octopus 15.2.9. I have forced the bitmap bluefs and bluestore
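The allocator mentioned at the end can be forced through the config database; a sketch (OSDs need a restart for it to take effect, and osd.0 is just an example id):
  $ ceph config set osd bluestore_allocator bitmap
  $ ceph config set osd bluefs_allocator bitmap
  $ ceph config get osd.0 bluestore_allocator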

[ceph-users] Cephfs metadata and MDS on same node

2021-03-09 Thread Jesper Lykkegaard Karlsen
Dear Ceph’ers, I am about to upgrade the MDS nodes for CephFS in the Ceph cluster (erasure code 8+3) I am administrating. Since they will get plenty of memory and CPU cores, I was wondering if it would be a good idea to move the metadata OSDs (NVMe's currently on OSD nodes together with cephfs_data
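Independent of where the MDS daemons run, the metadata pool can be pinned to the NVMe OSDs via a device-class CRUSH rule; a sketch, with the rule name being an arbitrary choice and the pool name taken from above:
  $ ceph osd crush rule create-replicated meta-nvme default host nvme
  $ ceph osd pool set cephfs_metadata crush_rule meta-nvme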

[ceph-users] 2 PGs (1x inconsistent, 1x unfound/degraded) - unable to fix

2021-03-09 Thread Jeremi Avenant
Good day. I'm currently decommissioning a cluster that runs EC 3+1 (rack failure domain, with 5 racks); however, the cluster still has some production items on it since I'm in the process of moving it to our new EC 8+2 cluster. It is running Luminous 12.2.13 on Ubuntu 16 HWE, containerized with
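For the two problem PGs, the usual sequence is to attempt a repair first and only then consider giving up on unfound objects; a sketch (the pgid is a placeholder, and the last command discards data irreversibly):
  $ ceph pg repair <pgid>                      # for the inconsistent PG
  $ ceph pg <pgid> query | less                # shows which OSDs were probed for the unfound objects
  $ ceph pg <pgid> mark_unfound_lost delete    # last resort on an EC pool; those objects are lost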

[ceph-users] Any way can recover data?

2021-03-09 Thread Elians Wan
We use the S3 API to upload files to the Ceph cluster into a non-versioned bucket, and we overwrote many files that we now want to recover. Is there any way to recover them?
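Short answer for a non-versioned bucket: once RGW has garbage-collected the overwritten data there is generally nothing left to recover. Enabling versioning prevents this in the future; a hedged example with the AWS CLI (bucket name and endpoint are placeholders):
  $ aws s3api put-bucket-versioning --bucket my-bucket \
        --versioning-configuration Status=Enabled \
        --endpoint-url http://rgw.example.com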