Hi,

Was that cluster ever configured with SSL before? Maybe the initial Alertmanager webhook config had the SSL-enabled URL added to it, and it has not picked up that SSL was since disabled. Have you tried redeploying Alertmanager to refresh its config?
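For reference, on a cephadm-managed cluster you could inspect the rendered Alertmanager config and the dashboard's notion of the Alertmanager host roughly like this. This is only a sketch: the <fsid> and <daemon-name> placeholders in the path are illustrative, not taken from your cluster, and the commands assume cephadm deployment.

```shell
# What the dashboard thinks the Alertmanager API host is
# (can be changed with `ceph dashboard set-alertmanager-api-host`):
ceph dashboard get-alertmanager-api-host

# The cephadm-rendered Alertmanager config on the host running it;
# <fsid> and <daemon-name> are placeholders for your cluster:
cat /var/lib/ceph/<fsid>/alertmanager.<daemon-name>/etc/alertmanager/alertmanager.yml

# Exact Ceph version(s) running, useful for matching known bugs:
ceph versions

# Ask cephadm to redeploy Alertmanager so its config is regenerated:
ceph orch redeploy alertmanager
```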
It might be worth checking the Alertmanager configuration to see which URLs it is pointing to. It would also help if you could share some details about your cluster, such as the version it is currently running. I remember there were some bugs around the standby server and Alertmanager configuration, but they seem to have been fixed in some version of Ceph.

Regards,
Nizam

On Mon, Dec 8, 2025 at 5:22 PM lejeczek via ceph-users <[email protected]> wrote:

> Hi guys.
>
> I've no _ssl_ used by _dashboard_
>
> -> $ ceph mgr services
> {
>     "dashboard": "http://10.1.1.63:8080/",
>     "prometheus": "http://10.1.1.63:9283/"
> }
> -> $ ceph config get mgr mgr/dashboard/ssl
> false
>
> and the 'standby' server's logs:
> ...
> Dec 08 11:33:39
> ceph-9f4f9dba-72c7-11f0-8052-525400519d29-alertmanager-podster1[2503]:
> ts=2025-12-08T11:33:39.781Z caller=dispatch.go:353
> level=error component=dispatcher msg="Notify for alerts
> failed" num_alerts=1 err="ceph-dashboard/webhook[0]: notify
> retry canceled after 7 attempts: Post \"<redacted>\": dial
> tcp 10.1.1.63:8443: connect: connection refused;
> ceph-dashboard/webhook[1]: notify retry canceled after 7
> attempts: Post \"<redacted>\": dial tcp 10.1.1.61:8443:
> connect: connection refused"
> Dec 08 11:33:39
> ceph-9f4f9dba-72c7-11f0-8052-525400519d29-alertmanager-podster1[2503]:
> ts=2025-12-08T11:33:39.781Z caller=notify.go:848 level=warn
> component=dispatcher receiver=ceph-dashboard
> integration=webhook[0]
> aggrGroup="{}/{}:{alertname=\"CephHealthWarning\"}"
> msg="Notify attempt failed, will retry later" attempts=1
> err="Post \"<redacted>\": dial tcp 10.1.1.63:8443: connect:
> connection refused"
> ...
> With the snippet of the logs above, _ceph_ says only:
> -> $ ceph health detail
> HEALTH_WARN mon podster1 is low on available space
> [WRN] MON_DISK_LOW: mon podster1 is low on available space
>     mon.podster1 has 12% avail
>
> Nothing listens on 8443 anywhere, which is as it should be -
> so why does the standby server want that? Would anybody know?
> 10.1.1.61 is the standby's own IP.
>
> many thanks, L.
> _______________________________________________
> ceph-users mailing list -- [email protected]
> To unsubscribe send an email to [email protected]

--
Nizamudeen A
Sr. Software Engineer - IBM Partner Engineer
IBM and Red Hat Ceph Storage
Red Hat <https://www.redhat.com/>
