[ceph-users] Re: reef 18.2.2 (hot-fix) QE validation status

2024-03-06 Thread Redouane Kachach
Looks good to me. Testing went OK without any issues. Thanks, Redo. On Tue, Mar 5, 2024 at 5:22 PM Travis Nielsen wrote: > Looks great to me, Redo has tested this thoroughly. > > Thanks! > Travis > > On Tue, Mar 5, 2024 at 8:48 AM Yuri Weinstein wrote: > >> Details of this release are

[ceph-users] Re: reef 18.2.1 QE Validation status

2023-11-15 Thread Redouane Kachach
> On Mon, Nov 13, 2023 at 12:14 PM Yuri Weinstein wrote: >> Redouane >> What would be a sufficient level of testing (teuthology suite(s)) >> assuming this PR is approved to be added?

[ceph-users] Re: reef 18.2.1 QE Validation status

2023-11-15 Thread Redouane Kachach
Hi Yuri, I've just backported to reef several fixes that I introduced in the last months for the rook orchestrator. Most of them are fixes for dashboard issues/crashes that only happen on Rook environments. The PR [1] has all the changes and it was merged into reef this morning. We really need

[ceph-users] Re: Seeking feedback on Improving cephadm bootstrap process

2023-05-31 Thread Redouane Kachach
e. So my > answer to "how do I start over" would be "go figure it out, its an > important lesson". > > Best regards, > = > Frank Schilder > AIT Risø Campus > Bygning 109, rum S14 > > > From:

[ceph-users] Seeking feedback on Improving cephadm bootstrap process

2023-05-26 Thread Redouane Kachach
Dear ceph community, As you are aware, cephadm has become the default tool for installing Ceph on bare-metal systems. Currently, during the bootstrap process of a new cluster, if the user interrupts the process manually or if there are any issues causing the bootstrap process to fail, cephadm
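For readers who hit a failed or interrupted bootstrap today and want to start over, a minimal cleanup sketch (the fsid below is a placeholder; read the real one from the bootstrap output or from `cephadm ls`):

```shell
# Hedged sketch: remove the leftovers of a failed/partial cluster so
# bootstrap can be retried. The fsid is a placeholder.
cephadm ls                                   # list daemons and the fsid left behind
cephadm rm-cluster --force --fsid 00000000-0000-0000-0000-000000000000
```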

[ceph-users] Re: [EXTERNAL] Re: Can't connect to MDS admin socket after updating to cephadm

2022-11-16 Thread Redouane Kachach Elhichou
Normally it should work. Another way to do it is basically by just entering the container using podman commands (or docker). For this, just run: > podman ps | grep mds | awk '{print $1}' (to get the container ID) > podman exec -it <container-id> /bin/sh That should work if the container is running.
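The two steps above can be combined into one pipeline. A sketch, assuming podman and a running mds container; the sample `podman ps` line below is made up purely to show what the awk step extracts:

```shell
# On a real host (requires podman and a running mds container):
#   cid=$(podman ps | awk '/mds/ {print $1; exit}')
#   podman exec -it "$cid" /bin/sh
# The awk step simply prints the first column (the container ID) of the
# first matching line. Demonstrated here with a made-up `podman ps` line:
echo "deadbeef1234  quay.io/ceph/ceph:v17  mds.a" | awk '/mds/ {print $1; exit}'
```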

[ceph-users] Re: How to ... alertmanager and prometheus

2022-11-08 Thread Redouane Kachach Elhichou
https://prometheus.io/docs/prometheus/2.28/configuration/configuration/#http_sd_config On Tue, Nov 8, 2022 at 4:47 PM Eugen Block wrote: > I somehow missed the HA part in [1], thanks for pointing that out. > > > Zitat vo

[ceph-users] Re: How to ... alertmanager and prometheus

2022-11-08 Thread Redouane Kachach Elhichou
If you are running quincy and using cephadm then you can have more instances of prometheus (and other monitoring daemons) running in HA mode by increasing the number of daemons as in [1]: from a cephadm shell (to run 2 instances of prometheus and alertmanager): > ceph orch apply prometheus
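The placement-count form of the command can be sketched as follows (the counts are illustrative; verify the placement syntax against your release's orchestrator docs):

```shell
# Hedged sketch (quincy+ cephadm shell): run two instances of each
# monitoring daemon. The trailing number is a placement count.
ceph orch apply prometheus 2
ceph orch apply alertmanager 2
ceph orch ls            # verify the scheduled daemon counts
```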

[ceph-users] Re: setting unique labels in cephadm installed (pacific) prometheus.yml

2022-10-25 Thread Redouane Kachach Elhichou
Currently the generated template is the same for all the hosts and there's no way to have a dedicated template for a specific host AFAIK. On Tue, Oct 25, 2022 at 12:45 PM Lasse Aagren wrote: > The context provided, when parsing the template: > > >

[ceph-users] Re: 17.2.4: mgr/cephadm/grafana_crt is ignored

2022-10-05 Thread Redouane Kachach Elhichou
Glad it helped you to fix the issue. I'll open a tracker to fix the docs. On Wed, Oct 5, 2022 at 3:52 PM E Taka <0eta...@gmail.com> wrote: > Thanks, Redouane, that helped! The documentation should of course also be > updated in this context. > > Am Mi., 5. Okt. 2022 um 15:33 Uh

[ceph-users] Re: 17.2.4: mgr/cephadm/grafana_crt is ignored

2022-10-05 Thread Redouane Kachach Elhichou
Hello, As of this PR https://github.com/ceph/ceph/pull/47098 grafana cert/key are now stored per-node. So instead of *mgr/cephadm/grafana_crt* they are stored per-node as: *mgr/cephadm/{hostname}/grafana_crt* *mgr/cephadm/{hostname}/grafana_key* In order to see the config entries that have
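A sketch of inspecting and setting those per-node entries (the hostname `host1` and the file paths are placeholders):

```shell
# Hedged sketch: list and set the per-node Grafana cert/key entries.
ceph config-key dump | grep grafana                        # list existing entries
ceph config-key set mgr/cephadm/host1/grafana_crt -i /path/to/grafana.crt
ceph config-key set mgr/cephadm/host1/grafana_key -i /path/to/grafana.key
ceph orch reconfig grafana                                 # pick up the new cert
```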

[ceph-users] Re: Haproxy error for rgw service

2022-07-21 Thread Redouane Kachach Elhichou
Great, thank you. Best, Redo. On Thu, Jul 21, 2022 at 2:01 PM Robert Reihs wrote: > Bug Reported: > https://tracker.ceph.com/issues/56660 > Best > Robert Reihs > > On Tue, Jul 19, 2022 at 11:44 AM Redouane Kachach Elhichou < > rkach...@redhat.com> wrote: > >

[ceph-users] Re: [cephadm] ceph config as yaml

2022-07-19 Thread Redouane Kachach Elhichou
> On Tuesday, July 19th, 2022 at 13:47, Redouane Kachach Elhichou < > rkach...@redhat.com> wrote: > > > > Did you try the *rm *option? both ceph config and ceph config-key support > > removing config keys: > > > > From: > > > https://docs.ceph.com/en/qu
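A sketch of both removal forms (the option and key names below are illustrative placeholders):

```shell
# Hedged sketch: both interfaces support removing entries.
ceph config rm osd debug_osd              # unset a config option for a section
ceph config-key rm mgr/cephadm/some_key   # delete a raw config-key entry
```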

[ceph-users] Re: [cephadm] ceph config as yaml

2022-07-19 Thread Redouane Kachach Elhichou
? > > Best, > > Luis Domingues > Proton AG > > > --- Original Message --- > On Friday, July 15th, 2022 at 17:06, Redouane Kachach Elhichou < > rkach...@redhat.com> wrote: > > > > This section could be added to any service spec. cephadm will

[ceph-users] Re: Haproxy error for rgw service

2022-07-19 Thread Redouane Kachach Elhichou
Great, thanks for sharing your solution. It would be great if you can open a tracker describing the issue so it could be fixed later in cephadm code. Best, Redo. On Tue, Jul 19, 2022 at 9:28 AM Robert Reihs wrote: > Hi, > I think I found the problem. We are using ipv6 only, and the config
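For reference, an IPv6-only cluster typically needs the messenger bind options flipped; a sketch, to be verified against the docs for your release:

```shell
# Hedged sketch: make Ceph daemons bind IPv6 instead of IPv4.
ceph config set global ms_bind_ipv6 true
ceph config set global ms_bind_ipv4 false
```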

[ceph-users] Re: [cephadm] ceph config as yaml

2022-07-15 Thread Redouane Kachach Elhichou
. > > Best Regards, > Ali > On 15.07.22 15:21, Redouane Kachach Elhichou wrote: > > Hello Ali, > > You can set configuration by including a config section in your yaml as > follows: > > config: > param_1: val_1 > ... > param_N:

[ceph-users] Re: [cephadm] ceph config as yaml

2022-07-15 Thread Redouane Kachach Elhichou
Hello Ali, You can set configuration by including a config section in your yaml as follows: config: param_1: val_1 ... param_N: val_N this is equivalent to calling the following ceph cmd: > ceph config set Best Regards, Redo. On Fri, Jul 15, 2022 at 2:45 PM Ali Akil
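A minimal spec sketch showing where that config section sits (the service type, id, and parameter values are illustrative placeholders, not recommendations):

```shell
# Hedged sketch: a service spec carrying a config: section.
cat > rgw-spec.yaml <<'EOF'
service_type: rgw
service_id: myrgw
placement:
  count: 2
config:
  debug_rgw: "10"
EOF
# ceph orch apply -i rgw-spec.yaml   # then apply it from a cephadm shell
```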

[ceph-users] Re: Conversion to Cephadm

2022-06-27 Thread Redouane Kachach Elhichou
From the error message: 2022-06-25 21:51:59,798 7f4748727b80 INFO /usr/bin/ceph-mon: stderr too many arguments: [--default-log-to-journald=true,--default-mon-cluster-log-to-journald=true] it seems that you are not using the cephadm that corresponds to your ceph version. Please, try to get
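A quick way to compare the two versions and confirm the mismatch (a sketch):

```shell
# Hedged sketch: compare the local cephadm binary against the cluster.
cephadm version        # version of the cephadm binary on this host
ceph versions          # versions of all daemons in the running cluster
```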

[ceph-users] Re: Troubleshooting cephadm - not deploying any daemons

2022-06-09 Thread Redouane Kachach Elhichou
To see what cephadm is doing you can check the logs in two places: */var/log/ceph/cephadm.log* (here you can see what the cephadm running on each host is doing) and you can also check what the cephadm (mgr module) is doing by checking the logs of the mgr container via: > podman logs -f `podman ps | grep
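The mgr module's activity can also be read through the cluster itself, which avoids hunting for the container. A sketch, with the log-level option name taken from the cephadm troubleshooting docs:

```shell
# Hedged sketch: raise cephadm verbosity, then read it from the cluster log.
ceph config set mgr mgr/cephadm/log_to_cluster_level debug
ceph log last 50 debug cephadm        # show the last 50 cephadm log lines
```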

[ceph-users] Re: cannot assign requested address

2022-05-26 Thread Redouane Kachach Elhichou
Hello Dmitriy, You have to provide a valid ip during the bootstrap: --mon-ip <ip>. It must be a valid ip from some interface on the current node. Regards, Redouane. On Thu, May 26, 2022 at 2:14 AM Dmitriy Trubov wrote: > Hi, > > I'm trying to install ansible octopus with cephadm. > > Here is
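A sketch of the check plus the bootstrap call (the address is a placeholder; substitute one from your own interfaces):

```shell
# Hedged sketch: pick an address that exists on this node, then bootstrap.
ip -4 addr show                            # list local interface addresses
cephadm bootstrap --mon-ip 192.168.1.10    # placeholder; use one of yours
```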