[ceph-users] Re: deploying Ceph using FQDN for MON / MDS Services

2023-04-17 Thread Lokendra Rathour
yes I did, the mentioned command works fine on the node where ceph-common is installed: sudo mount -t ceph :/volumes/hns/conf/2ee9c2d0-873b-4d04-8c46-4c0da02787b8 /mnt/imgs -o name=foo,secret=AQABDzRkTaJCEhAAC7rC6E68ofwULnx6qX/VDA== -v parsing options:
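For comparison, a minimal sketch of the same mount with the monitors spelled out by FQDN instead of being taken from ceph.conf; the hostnames and the secret file path are placeholders (mount.ceph resolves the names in userland before handing addresses to the kernel):

```
sudo mount -t ceph mon1.example.com,mon2.example.com,mon3.example.com:/volumes/hns/conf/2ee9c2d0-873b-4d04-8c46-4c0da02787b8 \
     /mnt/imgs -o name=foo,secretfile=/etc/ceph/client.foo.secret -v
```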

[ceph-users] Re: ceph pg stuck - missing on 1 osd how to proceed

2023-04-17 Thread xadhoom76
Thanks, I tried to change the pg and pgp number to a higher value but the pg count does not increase. data: pools: 8 pools, 1085 pgs objects: 242.28M objects, 177 TiB usage: 553 TiB used, 521 TiB / 1.0 PiB avail pgs: 635281/726849381 objects degraded (0.087%)
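When pg_num appears stuck it is worth checking whether the pg_autoscaler is overriding the manual setting; a sketch with a placeholder pool name:

```
# inspect the current value and what the autoscaler wants (pool name is a placeholder)
ceph osd pool get mypool pg_num
ceph osd pool autoscale-status
# raise the values explicitly
ceph osd pool set mypool pg_num 2048
ceph osd pool set mypool pgp_num 2048
# turn the autoscaler off for this pool if it keeps reverting the change
ceph osd pool set mypool pg_autoscale_mode off
```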

[ceph-users] Re: Some hint for a DELL PowerEdge T440/PERC H750 Controller...

2023-04-17 Thread Matthias Ferdinand
Hi, > > Matthias suggest to enable write cache, you suggest to disable it... or i'm > > cache-confused?! ;-) there were some discussions about write cache settings last year, e.g. https://www.spinics.net/lists/ceph-users/msg73263.html https://www.spinics.net/lists/ceph-users/msg69489.html

[ceph-users] Re: [ceph 17.2.6] unable to create rbd snapshots for images with erasure code data-pool

2023-04-17 Thread Reto Gysi
That was all that it logged. In the meantime I did some further tests. I've created a new erasure-coded datapool 'ecpool_test', and if I create a new rbd image with this data pool I can create snapshots, but I can't create snapshots on either new or existing images that use the existing data pool
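A sketch of the test described above, with placeholder pool/image names and assuming overwrites are enabled on the erasure-coded pool:

```
# create an erasure-coded data pool and allow partial overwrites (required for RBD)
ceph osd pool create ecpool_test 32 32 erasure
ceph osd pool set ecpool_test allow_ec_overwrites true
# create an image whose data lives in the EC pool, metadata in the replicated 'rbd' pool
rbd create --size 10G --data-pool ecpool_test rbd/testimg
# snapshot on the new image works
rbd snap create rbd/testimg@snap1
# same operation against an image using the old data pool, with debug output
rbd snap create rbd/ceph-dev@backup --debug-ms 1 --debug-rbd 20
```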

[ceph-users] metadata sync

2023-04-17 Thread Christopher Durham
Hi, After doing a 'radosgw-admin metadata sync init' on a secondary site in a 2-cluster multisite configuration (see my previous post on doing a quincy upgrade with sts roles manually synced between primary and secondary), and letting it sync all metadata from the master zone, I get the following
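For reference, a sketch of the init-and-check sequence on the secondary zone; typically the local RGW daemons are restarted after the init so the full metadata sync actually starts (service names are placeholders):

```
# on the secondary zone: re-initialise the metadata full sync
radosgw-admin metadata sync init
# restart the local gateways so they pick up the re-initialised sync state
systemctl restart ceph-radosgw.target        # or: ceph orch restart rgw.<service> on cephadm
# watch progress
radosgw-admin metadata sync status
radosgw-admin sync status
```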

[ceph-users] Re: [ceph 17.2.6] unable to create rbd snapshots for images with erasure code data-pool

2023-04-17 Thread Ilya Dryomov
On Mon, Apr 17, 2023 at 6:37 PM Reto Gysi wrote: > > Hi Ilya, > > Thanks for the reply. Here's the output: > > root@zephir:~# rbd status ceph-dev > Watchers: >watcher=192.168.1.1:0/338620854 client.19264246 > cookie=18446462598732840969 > > root@zephir:~# rbd snap create

[ceph-users] Re: [ceph 17.2.6] unable to create rbd snapshots for images with erasure code data-pool

2023-04-17 Thread Reto Gysi
Hi Ilya, Thanks for the reply. Here's the output: root@zephir:~# rbd status ceph-dev Watchers: watcher=192.168.1.1:0/338620854 client.19264246 cookie=18446462598732840969 root@zephir:~# rbd snap create ceph-dev@backup --debug-ms 1 --debug-rbd 20 2023-04-17T18:23:16.211+0200

[ceph-users] Re: [ceph 17.2.6] unable to create rbd snapshots for images with erasure code data-pool

2023-04-17 Thread Ilya Dryomov
On Mon, Apr 17, 2023 at 2:01 PM Reto Gysi wrote: > > Dear Ceph Users, > > After upgrading from version 17.2.5 to 17.2.6 I no longer seem to be able > to create snapshots > of images that have an erasure coded datapool. > > root@zephir:~# rbd snap create ceph-dev@backup_20230417 > Creating snap:

[ceph-users] Re: [ceph 17.2.6] unable to create rbd snapshots for images with erasure code data-pool

2023-04-17 Thread Chris Palmer
I've just tried this on 17.2.6 and it worked fine. On 17/04/2023 12:57, Reto Gysi wrote: Dear Ceph Users, After upgrading from version 17.2.5 to 17.2.6 I no longer seem to be able to create snapshots of images that have an erasure coded datapool. root@zephir:~# rbd snap create

[ceph-users] MGR Memory Leak in Restful

2023-04-17 Thread Chris Palmer
We've hit a memory leak in the Manager Restful interface, in versions 17.2.5 & 17.2.6. On our main production cluster the active MGR grew to about 60G until the oom_reaper killed it, causing a successful failover and restart of the failed one. We can then see that the problem is recurring,
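A hedged sketch of checks and temporary workarounds only, not a fix for the leak itself:

```
# see which modules are enabled and which mgr is active
ceph mgr module ls
ceph mgr stat
# temporary relief: fail over to a standby mgr before the OOM killer does it
ceph mgr fail
# if the restful module is not actually needed, disabling it avoids the leaking path
ceph mgr module disable restful
```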

[ceph-users] Re: unable to deploy ceph -- failed to read label for XXX No such file or directory

2023-04-17 Thread Radoslav Bodó
when adding OSDs, the OSDs on the first host get created as expected, but while creating OSDs on the second host the output gets weird; even when adding each device separately, the output shows that `ceph orch` tries to create multiple OSDs at once ``` root@test1:~# for xxx in j k l m; do ceph orch
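A sketch of adding the devices one at a time with explicit host:device pairs (host and device names are placeholders):

```
# confirm which devices the orchestrator considers available on the second host
ceph orch device ls test2
# add one OSD per device explicitly instead of relying on a drivegroup spec
for dev in sdj sdk sdl sdm; do
    ceph orch daemon add osd test2:/dev/$dev
done
```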

[ceph-users] [ceph 17.2.6] unable to create rbd snapshots for images with erasure code data-pool

2023-04-17 Thread Reto Gysi
Dear Ceph Users, After upgrading from version 17.2.5 to 17.2.6 I no longer seem to be able to create snapshots of images that have an erasure coded datapool. root@zephir:~# rbd snap create ceph-dev@backup_20230417 Creating snap: 10% complete...failed. rbd: failed to create snapshot: (95)

[ceph-users] RADOSGW zone data-pool migration.

2023-04-17 Thread Gaël THEROND
Hi everyone, quick question regarding radosgw zone data-pool. I’m currently planning to migrate an old data-pool that was created with inappropriate failure-domain to a newly created pool with appropriate failure-domain. If I’m doing something like: radosgw-admin zone modify --rgw-zone default
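If the goal is to point the zone's placement target at the new pool, a sketch of the placement-level variant (zone, placement id and pool names are placeholders); note this only directs where new objects are written, it does not move existing data:

```
# point the placement target of the zone at the new data pool
radosgw-admin zone placement modify --rgw-zone default \
    --placement-id default-placement \
    --data-pool default.rgw.buckets.data.new
# make the change take effect across the realm
radosgw-admin period update --commit
```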

[ceph-users] Re: deploying Ceph using FQDN for MON / MDS Services

2023-04-17 Thread Robert Sander
On 14.04.23 12:17, Lokendra Rathour wrote: *mount: /mnt/image: mount point does not exist.* Have you created the mount point? Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin http://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030 / 405051-19

[ceph-users] Re: Radosgw-admin bucket list has duplicate objects

2023-04-17 Thread mahnoosh shahidi
Thanks for your responses. The two buckets have exactly the same name and also exactly the same object entries such as tag, etag, mtime etc. Best Regards, Mahnoosh On Sat, Apr 15, 2023 at 7:33 PM Ramin Najjarbashi < ramin.najarba...@gmail.com> wrote: > Hi, > > to verify if the bucket names are
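To check whether the two listed entries resolve to the same bucket instance, comparing the metadata is a quick test (the bucket name is a placeholder):

```
# entrypoint metadata: shows the bucket_id the name resolves to
radosgw-admin metadata get bucket:mybucket
# per-bucket statistics, including the bucket instance id and marker
radosgw-admin bucket stats --bucket mybucket
```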

[ceph-users] Re: OSDs remain not in after update to v17

2023-04-17 Thread Alexandre Becholey
Hi, Thank you all for your help, I was able to fix the issue (in a dirty way, but it worked). Here is a quick summary of the steps: - create a CentOS 8 Stream VM (I took the cloudimg from https://cloud.centos.org/centos/8-stream/x86_64/images/), to match what the container is using - git

[ceph-users] Re: pacific el7 rpms

2023-04-17 Thread Konstantin Shalygin
On EL7 only Nautilus was present. Pacific was from EL8 k > On 17 Apr 2023, at 11:29, Marc wrote: > > > Is there ever going to be rpms in > > https://download.ceph.com/rpm-pacific/el7/

[ceph-users] Re: CEPH Mirrors are lacking packages

2023-04-17 Thread Oliver Dzombic
Hi, thank you for your hint Burkhard! For de.ceph.com I changed the sync source from eu to us (download.ceph.com). So at least de.ceph.com should be in sync with the main source within the next 24h. -- Mit freundlichen Gruessen / Best regards Oliver Dzombic Layer7 Networks

[ceph-users] Troubleshooting cephadm OSDs aborting start

2023-04-17 Thread André Gemünd
Dear Ceph-users, we have trouble with a Ceph cluster after a full shutdown. A couple of OSDs don't start anymore, exiting with SIGABRT very quickly. With debug logs and lots of work (I find cephadm clusters hard to debug btw) we received the following stack trace: debug-16>
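For others hitting this, a sketch of pulling more detail out of a cephadm-managed OSD that aborts on start (the daemon id is a placeholder):

```
# journal output of the containerised daemon, via cephadm
cephadm logs --name osd.12
# raise the log levels for the next start attempt
ceph config set osd.12 debug_osd 20
ceph config set osd.12 debug_bluestore 20
# restart just this daemon and watch the logs again
ceph orch daemon restart osd.12
```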

[ceph-users] pacific el7 rpms

2023-04-17 Thread Marc
Are there ever going to be rpms in https://download.ceph.com/rpm-pacific/el7/ ?

[ceph-users] Re: CEPH Mirrors are lacking packages

2023-04-17 Thread Marc
The mirrors you are mentioning are missing even the pacific release, so maybe try https://download.ceph.com/ > > > at least eu.ceph.com and de.ceph.com are lacking packages for the > pacific release. All package not start with "c" (e.g. librbd, librados, > radosgw) are missing. >

[ceph-users] CEPH Mirrors are lacking packages

2023-04-17 Thread Burkhard Linke
Hi, at least eu.ceph.com and de.ceph.com are lacking packages for the pacific release. All packages not starting with "c" (e.g. librbd, librados, radosgw) are missing. Best regards, Burkhard Linke

[ceph-users] Re: Some hint for a DELL PowerEdge T440/PERC H750 Controller...

2023-04-17 Thread Frank Schilder
Hi Marco. >> For your disk type I saw "volatile write cache available = yes" on "the >> internet". This looks a bit odd, but maybe these HDDs do have some volatile >> cache. Try to disable it with smartctl and do the benchmark again. > > Sorry, i'm a bit puzzled here. > > Matthias suggest to
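A sketch of checking and disabling the volatile write cache before re-running the benchmark; the device name is a placeholder, and whether the setting survives a power cycle depends on the drive:

```
# show the current write cache state
smartctl -g wcache /dev/sdX
hdparm -W /dev/sdX
# disable the volatile write cache, then benchmark again
smartctl -s wcache,off /dev/sdX
hdparm -W 0 /dev/sdX
```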

[ceph-users] Re: deploying Ceph using FQDN for MON / MDS Services

2023-04-17 Thread Konstantin Shalygin
Hi, this is because of DNS: something in userland has to provide the IP addresses to the kernel. k Sent from my iPhone > On 17 Apr 2023, at 05:56, Lokendra Rathour wrote: > > Hi Team, > The mount at the client side should be independent of Ceph, but here in > this case of DNS SRV-based
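For reference, a sketch of the two pieces involved: mount.ceph (userland) resolves the names, either from mon_host in ceph.conf or from DNS SRV records; domain and hostnames are placeholders:

```
# ceph.conf on the client, using FQDNs that mount.ceph resolves in userland:
#   [global]
#   mon_host = mon1.example.com, mon2.example.com, mon3.example.com
#
# or, without mon_host, the MONs can be looked up via DNS SRV records:
#   _ceph-mon._tcp.example.com. 3600 IN SRV 10 60 6789 mon1.example.com.
#
# verify what the client will actually see
dig +short SRV _ceph-mon._tcp.example.com
getent hosts mon1.example.com
```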