I upgraded a cluster to 14.2.1 a month ago and installed the
ceph-mgr-dashboard so I could see and use the new dashboard. I then
upgraded the cluster again, from 14.2.1 to 14.2.2, this past week. After
clearing all the usual notifications and upgrading the dashboard package via
apt, I tried to enable the dashboard again (I also had to use the force
option), but now the health check shows an error.
Error Message:
MGR_MODULE_DEPENDENCY Module 'dashboard' has failed dependency: 'module'
object has no attribute 'PROTOCOL_SSLv3'
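For what it's worth, the attribute in that message comes from Python's ssl
module: on Python builds whose OpenSSL was compiled without SSLv3 support,
ssl.PROTOCOL_SSLv3 simply does not exist, so anything that references it
(here, a dependency of the dashboard module) fails with exactly this
AttributeError. A minimal check you can run on a mgr node, ideally with the
same Python interpreter ceph-mgr uses:

```python
import ssl

# On Python builds whose OpenSSL lacks SSLv3 support, the ssl module
# does not define PROTOCOL_SSLv3; referencing it raises AttributeError,
# which is the "'module' object has no attribute 'PROTOCOL_SSLv3'" error
# reported by the dashboard's dependency check.
has_sslv3 = hasattr(ssl, "PROTOCOL_SSLv3")
print("PROTOCOL_SSLv3 available:", has_sslv3)
```

If that prints False, the failure is coming from the Python/OpenSSL build
rather than from the dashboard module itself.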
I checked OpenSSL and I am on version 1.0.2g (March 2016) on all three
monitor nodes. The monitor nodes are running Ubuntu 16.04 with all packages
up to date. The OpenSSL site shows 1.0.2s as current, so the version
difference is small, but there is one. It appears Ubuntu wants us to move to
18.04, but I would rather not do that. All of the OSD host servers are
CentOS (we are slowly moving from Ubuntu to CentOS).
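One caveat (an assumption on my part): what may matter here is the OpenSSL
that the mgr's Python interpreter was linked against, not the openssl binary
on the system, and the two can differ. A quick way to see which OpenSSL
Python itself reports:

```python
import ssl

# ssl.OPENSSL_VERSION reports the OpenSSL this Python build is linked
# against, which is what the dashboard's Python dependencies actually use;
# it can differ from what `openssl version` prints on the same host.
print(ssl.OPENSSL_VERSION)
print(ssl.OPENSSL_VERSION_INFO)  # same information as a tuple
```

Running this with the interpreter that ceph-mgr uses would show whether the
module is really seeing 1.0.2g.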
I used "ceph mgr module disable dashboard" to disable the module, but the
error message persists. I then tried to enable the module with force and to
disable SSL with "ceph config set mgr mgr/dashboard/ssl false", but that
resulted in "Error EINVAL: unrecognized config option 'mgr/dashboard/ssl'".
Not sure if this will help (it shows SSL disabled):
ceph config-key dump
{
"config-history/1/": "<<< binary blob of length 12 >>>",
"config-history/2/": "<<< binary blob of length 12 >>>",
"config-history/2/+mgr/mgr/devicehealth/enable_monitoring": "true",
"config-history/3/": "<<< binary blob of length 12 >>>",
"config-history/3/+mgr/mgr/dashboard/ssl": "false",
"config-history/4/": "<<< binary blob of length 12 >>>",
"config-history/4/+mgr/mgr/dashboard/ssl": "true",
"config-history/4/-mgr/mgr/dashboard/ssl": "false",
"config-history/5/": "<<< binary blob of length 12 >>>",
"config-history/5/+mgr/mgr/dashboard/ssl": "false",
"config-history/5/-mgr/mgr/dashboard/ssl": "true",
"config-history/6/": "<<< binary blob of length 12 >>>",
"config-history/6/+mgr/mgr/dashboard/RGW_API_ACCESS_KEY": "",
"config-history/7/": "<<< binary blob of length 12 >>>",
"config-history/7/+mgr/mgr/dashboard/RGW_API_SECRET_KEY": "",
"config/mgr/mgr/dashboard/RGW_API_ACCESS_KEY": "",
"config/mgr/mgr/dashboard/RGW_API_SECRET_KEY": "",
"config/mgr/mgr/dashboard/ssl": "false",
"config/mgr/mgr/devicehealth/enable_monitoring": "true",
"mgr/dashboard/accessdb_v1": "{\"version\": 1, \"users\": {\"ceph\":
{\"username\": \" \", \"lastUpdate\": 1560736662, \"name\": null, \"roles\":
[\"administrator\"], \"password\": \" \", \"email\": null}}, \"roles\":
{}}",
"mgr/dashboard/crt": "-----BEGIN CERTIFICATE---------END
CERTIFICATE-----\n",
"mgr/dashboard/jwt_secret": "",
"mgr/dashboard/key": "-----BEGIN PRIVATE KEY---------END PRIVATE
KEY-----\n",
"mgr/devicehealth/last_scrape": "20190729-000743"
}
We have another cluster, on CentOS 7.6, that was recently upgraded from
Luminous to 14.2.2 with no issues, but since the OS differs it is not an
exact comparison.
At this point the cluster is in HEALTH_WARN; I am trying to clear that.
Regards,
-Brent
Existing Clusters:
Test: Nautilus 14.2.2 with 3 OSD servers, 1 mon/mgr, 1 gateway, 2 iSCSI
gateways (all virtual on NVMe)
US Production (HDD): Nautilus 14.2.2 with 11 OSD servers, 3 mons, 4 gateways
behind haproxy LB
UK Production (HDD): Nautilus 14.2.1 with 25 OSD servers, 3 mons/mgrs, 3
gateways behind haproxy LB
US Production (SSD): Nautilus 14.2.1 with 6 OSD servers, 3 mons/mgrs, 3
gateways behind haproxy LB
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com