[ceph-users] How to submit a bug report?

2023-03-16 Thread Patrick Vranckx
Hi, I suspect a bug in cephadm when configuring an ingress service for RGW. Our production cluster was upgraded continuously from Luminous to Pacific. When configuring the ingress service for RGW, the generated haproxy.cfg is incomplete. The same yaml file applied on our test cluster does the job. Regards,
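
A hedged sketch of the information worth attaching to a report on the Ceph tracker (https://tracker.ceph.com, Orchestrator project). The haproxy.cfg path under /var/lib/ceph/<fsid>/ is an assumption about cephadm's data layout; substitute your cluster fsid and daemon directory name:

    # versions in use and the ingress spec the orchestrator actually stored
    ceph versions
    ceph orch ls ingress --export > ingress-spec.yaml
    # the haproxy.cfg cephadm rendered on the affected host (path assumed)
    cat /var/lib/ceph/<fsid>/haproxy.<service_id>.<host>.*/haproxy/haproxy.cfg
    # recent cephadm log messages from the active mgr
    ceph log last cephadm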

[ceph-users] setup problem for ingress + SSL for RGW

2023-02-23 Thread Patrick Vranckx
Hi, Our cluster runs Pacific on Rocky 8. We have 3 RGW daemons running on port 7480. I tried to set up an ingress service with a yaml service definition, with no luck:

service_type: ingress
service_id: rgw.myceph.be
placement:
  hosts:
    - ceph001
    - ceph002
    - ceph003
spec:
  backend_service:
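
For reference, a hedged sketch of a complete ingress spec, assuming the backend RGW service is also named rgw.myceph.be; the virtual IP, ports and certificate below are placeholders, not values from the original message:

    cat > ingress.yaml <<'EOF'
    service_type: ingress
    service_id: rgw.myceph.be
    placement:
      hosts:
        - ceph001
        - ceph002
        - ceph003
    spec:
      backend_service: rgw.myceph.be   # must match the existing rgw service name (assumed)
      virtual_ip: 192.0.2.10/24        # placeholder VIP managed by keepalived
      frontend_port: 443               # placeholder HTTPS port exposed by haproxy
      monitor_port: 1967               # placeholder haproxy status port
      ssl_cert: |                      # placeholder PEM bundle (certificate + key)
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
    EOF
    ceph orch apply -i ingress.yaml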

[ceph-users] HELP NEEDED: cephadm adopt osd crash

2022-11-08 Thread Patrick Vranckx
Hi, We've already converted two PRODUCTION storage nodes on Octopus to cephadm without problems. On the third one, we only succeeded in converting one OSD.

[root@server4 osd]# cephadm adopt --style legacy --name osd.0
Found online OSD at //var/lib/ceph/osd/ceph-0/fsid
objectstore_type is bluestore
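
When a legacy OSD refuses to adopt, a hedged first step is to compare what cephadm and ceph-volume each see on that host before retrying:

    # daemons cephadm already knows about on this host
    cephadm ls
    # legacy OSDs and their LVM metadata as ceph-volume reports them
    ceph-volume lvm list
    # confirm the legacy data directory of the OSD that fails to adopt (id is a placeholder)
    ls -l /var/lib/ceph/osd/ceph-2/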

[ceph-users] TOO_MANY_PGS after upgrade from Nautilus to Octopus

2022-11-08 Thread Patrick Vranckx
Hi, We are currently upgrading our cluster from Nautilus to Octopus. After upgrading the mons and mgrs, we get warnings about the number of PGs. Which parameter changed during the upgrade to explain these new warnings? Nothing else was changed. Is it risky to change the PGs per pool as proposed
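
The TOO_MANY_PGS warning is tied to the mon_max_pg_per_osd limit, and Octopus also ships the pg_autoscaler, which reports pools whose pg_num it considers too high; a minimal sketch for confirming which of the two is speaking here:

    # the per-OSD PG limit the monitors enforce before warning
    ceph config get mon mon_max_pg_per_osd
    # what the pg_autoscaler would like each pool to have
    ceph osd pool autoscale-status
    # the health detail naming the offending pools
    ceph health detail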

[ceph-users] all PGs remapped after osd server reinstallation (Pacific)

2022-08-31 Thread Patrick Vranckx
Hi, I use a Ceph test infrastructure with only two storage servers running the OSDs. Objects are replicated between these servers:

[ceph: root@cepht001 /]# ceph osd dump | grep 'replicated size'
pool 1 '.rgw.root' replicated size 2 min_size 1 crush_rule 0 object_hash rjenkins pg_num 32
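
A hedged reading of the symptom: with replicated size 2 across exactly two hosts, every PG maps to both servers, so if the reinstalled server came back under a different CRUSH host bucket or with different OSD ids, CRUSH recomputes every mapping and all PGs show as remapped. The commands below only inspect the current state:

    # cluster-level view of misplaced objects and remapped PGs
    ceph -s
    # did the reinstalled host reappear under the same CRUSH bucket?
    ceph osd tree
    # did the OSDs keep their ids and weights?
    ceph osd dump | grep '^osd'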

[ceph-users] ceph orch: list of scheduled tasks

2022-06-07 Thread Patrick Vranckx
Hi, When you change the configuration of your cluster with "ceph orch apply ..." or "ceph orch daemon ...", tasks are scheduled:

[root@cephc003 ~]# ceph orch apply mgr --placement="cephc001 cephc002 cephc003"
Scheduled mgr update...

Is there a way to list all the pending tasks? Regards,
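
As far as I know there is no single command that dumps cephadm's internal queue, so treat the following as an approximation: the service and daemon listings show what is still converging, and the cephadm log channel records what the orchestrator has recently done or scheduled:

    # per-service view: running vs. expected daemon counts after an apply
    ceph orch ls
    # per-daemon view, forcing a refresh from the hosts
    ceph orch ps --refresh
    # recent orchestrator activity from the active mgr
    ceph log last cephadm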

[ceph-users] Unable to deploy new manager in Octopus

2022-06-02 Thread Patrick Vranckx
Hi, On my test cluster, I migrated from Nautilus to Octopus and then converted most of the daemons to cephadm. I got a lot of problems with podman 1.6.4 on CentOS 7 through an https proxy because my servers are on a private network. Now, I'm unable to deploy new managers and the cluster is in
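
A hedged way to isolate the proxy problem from the orchestrator itself: try the image pull manually with podman on the affected host, then re-apply the mgr service and watch whether daemons appear. The image tag and hostnames below are placeholders:

    # on the affected host: can podman reach the registry through the proxy at all?
    podman pull docker.io/ceph/ceph:v15      # Octopus-era image, tag is a placeholder
    # re-state the desired mgr placement (hostnames are placeholders)
    ceph orch apply mgr --placement="host1 host2 host3"
    # check whether mgr daemons actually get deployed
    ceph orch ls mgr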