[ceph-users] Re: [EXTERNAL] Re: RGW Bucket Notifications and MultiPart Uploads

2022-07-19 Thread Yuval Lifshitz
Yes, that would work. You would get a "404" until the object is fully uploaded. On Tue, Jul 19, 2022 at 6:00 PM Mark Selby wrote: > If you can that would be great. As it is likely to be a while before we > make the 16 -> 17 switch. > > > > Question: If I receive the 1st put notification for the
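For anyone wanting to automate the "does the object exist yet" check Mark asks about, a minimal sketch (endpoint, bucket and key names are placeholders, and the aws CLI simply stands in for whatever S3 client you already use):

  # HEAD the object named in the first put notification; per Yuval's reply,
  # RGW keeps returning 404 until the multipart upload completes
  aws s3api head-object \
      --endpoint-url http://rgw.example.com:8080 \
      --bucket my-bucket \
      --key my-object-key \
      || echo "object not fully uploaded yet"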

[ceph-users] Re: CephFS standby-replay has more dns/inos/dirs than the active mds

2022-07-19 Thread Patrick Donnelly
You're probably seeing this bug: https://tracker.ceph.com/issues/48673 Sorry I've not had time to finish a fix for it yet. Hopefully soon... On Tue, Jul 19, 2022 at 5:43 PM Bryan Stillwell wrote: > > We have a cluster using multiple filesystems on Pacific (16.2.7) and even > though we have

[ceph-users] Re: Quincy recovery load

2022-07-19 Thread Daniel Williams
Do you think maybe you should issue an immediate change/patch/update to quincy to change the default to wpq, given the cluster-ending nature of the problem? On Wed, Jul 20, 2022 at 4:01 AM Sridhar Seshasayee wrote: > Hi Daniel, > > > And further to my theory about the spin lock or similar,

[ceph-users] Re: Quincy recovery load

2022-07-19 Thread Sridhar Seshasayee
Hi Daniel, And further to my theory about the spin lock or similar, increasing my > recovery by 4-16x using wpq sees my cpu rise to 10-15% ( from 3% )... > but using mclock, even at very very conservative recovery settings sees a > median CPU usage of some multiple of 100% (eg. a multiple of a

[ceph-users] Re: rh8 krbd mapping causes no match of type 1 in addrvec problem decoding monmap, -2

2022-07-19 Thread Wesley Dillingham
Thanks. Interestingly, the older kernel did not have a problem with it but the newer kernel does. On Tue, Jul 19, 2022 at 3:35 PM Ilya Dryomov wrote: > On Tue, Jul 19, 2022 at 9:12

[ceph-users] Re: rh8 krbd mapping causes no match of type 1 in addrvec problem decoding monmap, -2

2022-07-19 Thread Ilya Dryomov
On Tue, Jul 19, 2022 at 9:12 PM Wesley Dillingham wrote: > > > from ceph.conf: > > mon_host = 10.26.42.172,10.26.42.173,10.26.42.174 > > map command: > rbd --id profilerbd device map win-rbd-test/originalrbdfromsnap > > [root@a2tlomon002 ~]# ceph mon dump > dumped monmap epoch 44 > epoch 44 >

[ceph-users] Re: rh8 krbd mapping causes no match of type 1 in addrvec problem decoding monmap, -2

2022-07-19 Thread Wesley Dillingham
from ceph.conf:

mon_host = 10.26.42.172,10.26.42.173,10.26.42.174

map command:

rbd --id profilerbd device map win-rbd-test/originalrbdfromsnap

[root@a2tlomon002 ~]# ceph mon dump
dumped monmap epoch 44
epoch 44
fsid 227623f8-b67e-4168-8a15-2ff2a4a68567
last_changed 2022-05-18 15:35:39.385763
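For context, an illustration (not output from this cluster): "no match of type 1 in addrvec" means the kernel client went looking for a v1 (legacy, port 6789) address in the monitors' advertised address vectors and found none it could use. A monmap entry that advertises both protocols looks roughly like:

  0: [v2:10.26.42.172:3300/0,v1:10.26.42.172:6789/0] mon.a

whereas an entry carrying only a v2: address leaves the msgr1 code path with nothing of "type 1" to match.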

[ceph-users] Re: Quincy recovery load

2022-07-19 Thread Daniel Williams
Just in case people don't know, osd_op_queue = "wpq" requires an OSD restart. And further to my theory about the spin lock or similar, increasing my recovery by 4-16x using wpq sees my cpu rise to 10-15% (from 3%)... but using mclock, even at very very conservative recovery settings sees a median
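For anyone following along, a minimal sketch of making that switch (the restart step assumes a cephadm-managed cluster; on package-based installs restart the units with systemctl instead):

  ceph config set osd osd_op_queue wpq
  # the scheduler is only read at OSD startup, so restart each OSD, e.g.:
  ceph orch daemon restart osd.0        # cephadm
  # or: systemctl restart ceph-osd@0    # package installs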

[ceph-users] Re: rh8 krbd mapping causes no match of type 1 in addrvec problem decoding monmap, -2

2022-07-19 Thread Ilya Dryomov
On Tue, Jul 19, 2022 at 5:01 PM Wesley Dillingham wrote: > > I have a strange error when trying to map via krdb on a RH (alma8) release > / kernel 4.18.0-372.13.1.el8_6.x86_64 using ceph client version 14.2.22 > (cluster is 14.2.16) > > the rbd map causes the following error in dmesg: > > [Tue

[ceph-users] Re: Single vs multiple cephfs file systems pros and cons

2022-07-19 Thread Patrick Donnelly
On Fri, Jul 15, 2022 at 1:46 PM Vladimir Brik wrote: > > Hello > > When would it be a good idea to use multiple smaller cephfs > filesystems (in the same cluster) instead of a big single one > with active-active MDSs? > > I am migrating about 900M files from Lustre to Ceph and I am > wondering if I
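For what it's worth, an additional filesystem is cheap to try on Octopus and later; a sketch with placeholder names:

  ceph fs volume create scratchfs                      # creates the data/metadata pools and an MDS via the orchestrator
  ceph fs authorize scratchfs client.scratchuser / rw  # per-filesystem client caps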

[ceph-users] Re: librbd leaks memory on crushmap updates

2022-07-19 Thread Ilya Dryomov
On Tue, Jul 19, 2022 at 5:10 PM Peter Lieven wrote: > > On 24.06.22 at 16:13, Peter Lieven wrote: > > On 23.06.22 at 12:59, Ilya Dryomov wrote: > >> On Thu, Jun 23, 2022 at 11:32 AM Peter Lieven wrote: > >>> On 22.06.22 at 15:46, Josh Baergen wrote: > Hey Peter, > > > I found

[ceph-users] Re: rh8 krbd mapping causes no match of type 1 in addrvec problem decoding monmap, -2

2022-07-19 Thread Wesley Dillingham
Tried with the rh8/14.2.16 package version and same issue. dmesg shows the error in the email subject; stdout shows: rbd: map failed: (110) Connection timed out. On Tue, Jul 19, 2022 at 11:00 AM

[ceph-users] Re: librbd leaks memory on crushmap updates

2022-07-19 Thread Peter Lieven
On 24.06.22 at 16:13, Peter Lieven wrote: On 23.06.22 at 12:59, Ilya Dryomov wrote: On Thu, Jun 23, 2022 at 11:32 AM Peter Lieven wrote: On 22.06.22 at 15:46, Josh Baergen wrote: Hey Peter, I found relatively large allocations in the qemu smaps and checked the contents. It contained

[ceph-users] Re: [EXTERNAL] Re: RGW Bucket Notifications and MultiPart Uploads

2022-07-19 Thread Mark Selby
If you can, that would be great, as it is likely to be a while before we make the 16 -> 17 switch. Question: If I receive the 1st put notification for the partial object and I check RGW to see if the object actually exists – do you know if RGW will tell me that it is missing until the

[ceph-users] rh8 krbd mapping causes no match of type 1 in addrvec problem decoding monmap, -2

2022-07-19 Thread Wesley Dillingham
I have a strange error when trying to map via krbd on an RH (alma8) release / kernel 4.18.0-372.13.1.el8_6.x86_64 using ceph client version 14.2.22 (cluster is 14.2.16). The rbd map causes the following error in dmesg:

[Tue Jul 19 07:45:00 2022] libceph: no match of type 1 in addrvec
[Tue Jul 19

[ceph-users] Re: [cephadm] ceph config as yaml

2022-07-19 Thread Ali Akil
We would need something similar to etcd to track the state of the services. The spec configuration files should always mirror the state of the Ceph cluster. On 19.07.22 14:02, Redouane Kachach Elhichou wrote: Hi Luis, I'm not aware of any option on the specs to remove config entries. I'm

[ceph-users] Re: Can't setup Basic Ceph Client

2022-07-19 Thread Jean-Marc FONTANA
Hello Iban, Thanks for your answer! We finally managed to connect with the admin keyring, but we think that is not the best practice. We shall try your conf and let you know the result. Best regards JM On 19/07/2022 at 11:08, Iban Cabrillo wrote: Hi Jean, If you do not

[ceph-users] Re: Can't setup Basic Ceph Client

2022-07-19 Thread Jean-Marc FONTANA
Hello, Thanks for your answer! We finally managed to connect with the admin keyring, but we think that is not the best practice. A little after your message, there was another one which indicates a way to get a real client. We shall try it and let you know the result. Best

[ceph-users] Re: [cephadm] ceph config as yaml

2022-07-19 Thread Redouane Kachach Elhichou
Hi Luis, I'm not aware of any option on the specs to remove config entries. I'm afraid you'd need to do it yourself by using the rm command. Redo. On Tue, Jul 19, 2022 at 1:53 PM Luis Domingues wrote: > Hi, > > Yes, we tried that, and it's working. That's the way we do it when I refer > to

[ceph-users] Re: [cephadm] ceph config as yaml

2022-07-19 Thread Redouane Kachach Elhichou
Did you try the *rm* option? Both ceph config and ceph config-key support removing config keys: From: https://docs.ceph.com/en/quincy/man/8/ceph/#ceph-ceph-administration-tool ceph config-key [ *rm* | *exists* | *get* | *ls* | *dump* | *set* ] … ceph config [ *dump* | *ls* | *help* | *get* |
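Concretely, something along these lines (the key names below are only examples):

  ceph config rm osd osd_op_queue          # drop a centralized config option for the osd section
  ceph config-key rm mgr/some/config-key   # drop a raw config-key entry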

[ceph-users] replacing OSD nodes

2022-07-19 Thread Jesper Lykkegaard Karlsen
Hi all,

Setup: Octopus - erasure 8-3

I had gotten to the point where I had some rather old OSD nodes that I wanted to replace with new ones. The procedure was planned like this:

* add new replacement OSD nodes
* set all OSDs on the retiring nodes to out
* wait for everything to
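For the "set all OSDs on the retiring nodes to out" step, a hedged sketch (the host name is a placeholder; ceph osd ls-tree lists the OSD ids under a CRUSH bucket):

  ceph osd out $(ceph osd ls-tree old-osd-node-01)
  # then watch recovery/backfill until the data has drained off the node
  ceph -s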

[ceph-users] Re: crashes after upgrade from octopus to pacific

2022-07-19 Thread Ramin Najjarbashi
ceph version 16.2.9 (4c3647a322c0ff5a1dd2344e039859dcbd28c830) pacific (stable)
 1: /lib64/libpthread.so.0(+0x12ce0) [0x7f4558e24ce0]
 2: (RGWHandler_REST_S3Website::retarget(RGWOp*, RGWOp**, optional_yield)+0x174) [0x7f4563e02684]
 3: (rgw_process_authenticated(RGWHandler_REST*, RGWOp*&,

[ceph-users] crashes after upgrade from octopus to pacific

2022-07-19 Thread Ramin Najjarbashi
Hi My account on "tracker.ceph.com" was not approved after 5 days, but anyway I have some problems with Ceph v16.2.7. According to this issue [rgw: s3website crashes after upgrade from octopus to pacific](https://tracker.ceph.com/issues/53913), RGW crashes when the s3website API is called without a subdomain

[ceph-users] Re: Haproxy error for rgw service

2022-07-19 Thread Redouane Kachach Elhichou
Great, thanks for sharing your solution. It would be great if you can open a tracker describing the issue so it could be fixed later in cephadm code. Best, Redo. On Tue, Jul 19, 2022 at 9:28 AM Robert Reihs wrote: > Hi, > I think I found the problem. We are using ipv6 only, and the config

[ceph-users] Re: Can't setup Basic Ceph Client

2022-07-19 Thread Iban Cabrillo
Hi Jean, If you do not want to use the admin user (which is the most sensible approach), you must create a client with rbd access to the pool on which you are going to perform the I/O actions. For example, in our case it is the user cinder: client.cinder key:
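For anyone wanting the concrete commands, a minimal sketch of a non-admin RBD client (user and pool names are placeholders):

  ceph auth get-or-create client.rbduser \
      mon 'profile rbd' \
      osd 'profile rbd pool=mypool' \
      -o /etc/ceph/ceph.client.rbduser.keyring
  # then on the client host:
  rbd --id rbduser -p mypool ls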

[ceph-users] Re: new crush map requires client version hammer

2022-07-19 Thread Iban Cabrillo
Hi, Looking deeper at my configuration I see:

[root@cephmon03 ~]# ceph osd dump | grep min_compat_client
require_min_compat_client firefly
min_compat_client hammer

Is it safe to run:

ceph osd set-require-min-compat-client hammer

in order to enable straw2? regards I,
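For reference, the usual sequence is to confirm no pre-hammer clients are connected before raising the requirement (run at your own discretion):

  ceph features                                    # check the releases/features of connected clients
  ceph osd set-require-min-compat-client hammer
  ceph osd crush set-all-straw-buckets-to-straw2   # convert existing straw buckets to straw2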

[ceph-users] Re: Can't setup Basic Ceph Client

2022-07-19 Thread Kai Stian Olstad
On 08.07.2022 16:18, Jean-Marc FONTANA wrote: We're planning to use rbd too and get a block device for a linux server. In order to do that, we installed the ceph-common packages and created ceph.conf and ceph.keyring as explained at Basic Ceph Client Setup — Ceph Documentation
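For reference, the client setup described in that document boils down to roughly this, run on a mon/admin node (the client name is a placeholder):

  ceph config generate-minimal-conf > /etc/ceph/ceph.conf                  # copy to the client host
  ceph auth get client.rbduser > /etc/ceph/ceph.client.rbduser.keyring     # copy to the client host as well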

[ceph-users] new crush map requires client version hammer

2022-07-19 Thread Iban Cabrillo
Dear cephers, The upgrade has been successful and all cluster elements are running version 14.2.22 (including clients), and right now the cluster is HEALTH_OK, msgr2 is enabled and working properly. Following the upgrade guide from mimic to nautilus

[ceph-users] Re: Quincy recovery load

2022-07-19 Thread Daniel Williams
Also never had problems with backfill / rebalance / recovery, but now seeing runaway CPU usage even with very conservative recovery settings after upgrading to quincy from pacific:

osd_recovery_sleep_hdd = 0.1
osd_max_backfills = 1
osd_recovery_max_active = 1
osd_recovery_delay_start = 600

Tried:
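(In case it saves someone a lookup: those values can be applied cluster-wide at runtime as below. Note that with the default mclock scheduler in Quincy the recovery sleep options are disregarded, so they only take effect under wpq.)

  ceph config set osd osd_recovery_sleep_hdd 0.1
  ceph config set osd osd_max_backfills 1
  ceph config set osd osd_recovery_max_active 1
  ceph config set osd osd_recovery_delay_start 600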

[ceph-users] Re: Haproxy error for rgw service

2022-07-19 Thread Robert Reihs
Hi, I think I found the problem. We are using ipv6 only, and the config that cephadm creates only adds the ipv4 settings:

/etc/sysctl.d/90-ceph-FSID-keepalived.conf
# created by cephadm
# IP forwarding and non-local bind
net.ipv4.ip_forward = 1
net.ipv4.ip_nonlocal_bind = 1

I added:
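(The message is cut off above, so the following is only a guess at the addition, not a quote: the ipv6 counterparts of those two sysctls would be along these lines.)

  # ipv6 equivalents of the ipv4 lines cephadm generates
  net.ipv6.conf.all.forwarding = 1
  net.ipv6.ip_nonlocal_bind = 1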

[ceph-users] Re: RGW error Coundn't init storage provider (RADOS)

2022-07-19 Thread Robert Reihs
Yes, I checked pg_num, pgp_num and mon_max_pg_per_osd. I also set up a single-node cluster with the same ansible script we have, using cephadm for setting up and managing the cluster. I had the same problem on the new single-node cluster without setting up any other services. When I created the pools
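For anyone hitting the same "Couldn't init storage provider (RADOS)" error, one common cause on a small or single-node cluster is RGW failing to create its pools because the PG-per-OSD budget is already used up; a hedged set of checks (the value in the last line is only an example):

  ceph config get mon mon_max_pg_per_osd
  ceph osd pool ls detail                          # how many PGs are already allocated
  ceph config set global mon_max_pg_per_osd 500    # raise only if you understand the trade-off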