Hi Jens,

I have just installed a single node Debian 12 Ceph cluster (Reef v18.2.7) and I 
haven't been able to reproduce the 'No module named sklearn.svm.classes' issue 
that you're facing.

However, I have found another issue [1], and the module crashes from time to 
time. This has me wondering if there's much to expect from this module anymore, 
considering the last commit was 4 years ago.

Regards,
Frédéric.

[1] Setting any value for mgr/diskprediction_local/predict_interval causes the 
module to fail with the error "unsupported operand type(s) for %: 'float' and 
'str'", because config-key settings are not properly cast to int/float in the 
module. Changing line 90 of /usr/share/ceph/mgr/diskprediction_local/module.py 
to 'predicted_frequency = float(self.predict_interval) or 86400' and restarting 
the MGR fixes the issue.
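For anyone hitting this, here is a minimal standalone sketch of the type error 
and the cast fix. The helper name and the scheduling arithmetic are mine for 
illustration, not the module's actual code; the point is that mgr config-key 
values come back as strings unless cast:

```python
# Illustrative sketch (not the module's exact code): Ceph mgr config-key
# values are returned as strings, so arithmetic on them raises TypeError
# unless they are cast first.

def seconds_until_next_run(predict_interval, elapsed):
    # The fix: cast the config-key string to float, falling back to
    # 86400 (one day) when the option is unset or zero.
    predicted_frequency = float(predict_interval) or 86400
    return predicted_frequency - elapsed % predicted_frequency

# Without the float() cast, 'elapsed % predict_interval' would be
# float % str, which raises:
# TypeError: unsupported operand type(s) for %: 'float' and 'str'
print(seconds_until_next_run("7200", 1000.0))   # 6200.0
print(seconds_until_next_run("0", 1000.0))      # 85400.0
```

After patching module.py, restarting the MGR (as described above) makes the 
module pick up the change.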


----- On 6 May 25, at 19:34, Jens Galsgaard j...@gitservice.dk wrote:

> Hi Frédéric.
> 
> I didn't see the link before.
> 
> I am using this image:
> quay.io/ceph/ceph@sha256:1607a746adb9332f71b42e98768e8a16ed96e71c1449794fcece9f6ada16b140
> 
> I see from inside the mgr container that it is built with centos 9 stream.
> 
> The GUI says: 18.2.6 (ff498e17d264a1a4d588c361cbce9cc65afa2327) reef (stable)
> 
> The system was installed with cephadm
> 
> Kind Regards,
> Jens Galsgaard
> 
> -----Original message-----
> From: Frédéric Nass <frederic.n...@univ-lorraine.fr>
> Sent: Friday, 25 April 2025 09.01
> To: Jens Galsgaard <j...@gitservice.dk>
> Cc: ceph-users <ceph-users@ceph.io>
> Subject: Re: [ceph-users] Re: failing to enable disk failure prediction
> 
> Hi Jens,
> 
> I suppose you've seen this [1].
> 
> sklearn was added to the quay.io ceph container image as a package installed
> from Kefu's third-party repo, added to the image as
> /etc/yum.repos.d/_copr:copr.fedorainfracloud.org:tchaikov:python-scikit-learn.repo.
> 
> Can you share which container image you're using on Debian Bookworm, and where
> it's pulled from? A 'podman ps' or 'docker ps' should tell you that. If the Ceph
> container image you're using is based on Debian, it could be that the python
> scikit-learn package was not built / installed inside the Debian-based
> container image.
> 
> Regards,
> Frédéric.
> 
> [1] https://github.com/ceph/ceph-container/pull/1821
> 
> ----- On 21 Apr 25, at 19:07, Jens Galsgaard j...@gitservice.dk wrote:
> 
>> Upgraded to 18.2.6 today and the module is still missing from the MGR 
>> container.
>> 
>> Is this the right place to write about this or is there a better channel?
>> 
>> Kind Regards, Jens Galsgaard
>> 
>> Gitservice.dk
>> Mob: +45 28864340
>> 
>> 
>> -----Original message-----
>> From: Jens Galsgaard <j...@gitservice.dk>
>> Sent: Monday, 14 April 2025 08.59
>> To: ceph-users@ceph.io
>> Subject: [ceph-users] failing to enable disk failure prediction
>> 
>> Hello,
>> 
>> I have a cluster built with cephadm, running on Debian 12/Bookworm.
>> Ceph 18.2.5.
>> 
>> I want to enable disk failure prediction and run this command:
>> 
>> ceph mgr module enable diskprediction_local
>> 
>> Then the cluster goes into ERROR state and the logs show:
>> 
>> 2025-04-14T08:55:37.073118+0200 mgr.host01.eqvsde [ERR] Unhandled
>> exception from module 'diskprediction_local' while running on
>> mgr.host01.eqvsde: No module named 'sklearn.svm.classes'
>> 2025-04-14T08:55:38.388289+0200 mon.host03 [ERR] Health check failed:
>> Module 'diskprediction_local' has failed: No module named 
>> 'sklearn.svm.classes'
>> (MGR_MODULE_ERROR)
>> 
>> How do I add sklearn to the container, as it is obviously missing?
>> _______________________________________________
>> ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an
>> email to ceph-users-le...@ceph.io