mox.com/threads/ceph-16-2-6-cephfs-failed-after-upgrade-from-16-2-5.97742/
[1]
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/KQ5A5OWRIUEOJBC7VILBGDIKPQGJQIWN/
On 28/12/21 at 15:02, Tecnologia Charne.Net wrote:
Today, I upgraded from Pacific 16.2.6 to 16.2.7.
Since some items in the dashboard weren't enabled (Cluster->Hosts->Versions,
for example) because I didn't have cephadm enabled, I activated it and
adopted every mon, mgr, and OSD in the cluster, following the instructions in
Thanks, Eugen, for your quick answer!
Yes, if I set "show_image_direct_url" to false, creation of volumes from
images works fine.
But creation takes much longer, because data is copied out of and back into
the Ceph cluster, instead of using the snapshot and copy-on-write approach.
All documentation recommends
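For context, the copy-on-write path depends on Glance exposing image locations to Cinder. A minimal glance-api.conf sketch (option names from the Glance/Ceph integration docs; the pool and user names here are assumptions, adjust for your deployment):

```ini
[DEFAULT]
# Expose the RBD location of images so Cinder can clone them copy-on-write
show_image_direct_url = True

[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images   ; assumption: your Glance image pool
rbd_store_user = glance   ; assumption: your Ceph client user for Glance
```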
Thanks Stefan!
Compiling the CRUSH map by hand on a production cluster makes me sweat...
but we like to take risks, don't we?
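For anyone following along, the usual hand-edit cycle looks like this (standard ceph/crushtool commands; it's worth test-mapping the compiled map before injecting it, and the rule number and replica count below are placeholders for your setup):

```shell
# Dump the current CRUSH map and decompile it to editable text
ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt

# ... edit crush.txt by hand ...

# Recompile and sanity-check the mappings before touching the cluster
crushtool -c crush.txt -o crush-new.bin
crushtool -i crush-new.bin --test --show-statistics --rule 0 --num-rep 3

# Inject the new map (this is the step that can trigger data movement)
ceph osd setcrushmap -i crush-new.bin
```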
On 14/9/20 at 11:48, Stefan Kooman wrote:
On 2020-09-14 16:09, André Gemünd wrote:
The same happened to us two weeks ago on Nautilus, although we added the rules
and
Exactly! I created a replicated-hdd rule and assigned it to an existing small
pool without any changes to the OSDs (all HDD), and the PGs started migrating...
It seems like new rules force migrations...
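For reference, this is the kind of sequence being described (standard Ceph commands since Luminous; the pool name is a placeholder):

```shell
# Create a replicated rule restricted to the hdd device class,
# with host as the failure domain
ceph osd crush rule create-replicated replicated-hdd default host hdd

# Point an existing pool at the new rule; even on an all-HDD cluster
# this can remap PGs, since the class-aware rule hashes differently
ceph osd pool set <pool> crush_rule replicated-hdd
```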
On 14/9/20 at 11:09, André Gemünd wrote:
Same happened to us two weeks ago using nautilus,
Hello!
We have a Ceph cluster with 30 4 TB HDDs across 6 hosts, used only for RBD.
Now we're receiving another 6 servers with 6 2 TB SSDs each, and we want to
create a separate pool for RBD on SSD, letting unused and backup volumes
stay on HDD.
I have some questions:
As I am only using
Yes, I was going to suggest the same on this page:
https://docs.ceph.com/docs/master/releases/octopus/
-Javier
El 25/3/20 a las 14:20, Bryan Stillwell escribió:
On Mar 24, 2020, at 5:38 AM, Abhishek Lekshmanan wrote:
#. Upgrade monitors by installing the new packages and restarting the
Hello!
I updated monitors to 14.2.8 and I have now:
health: HEALTH_ERR
Module 'telemetry' has failed: cannot concatenate 'str' and
'UUID' objects
Has anyone seen the same error?
Thanks!
-Javier
___
ceph-users mailing list --
Any thoughts?
I tried disabling and re-enabling the module, but the error remains.
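(For completeness, the disable/re-enable cycle referred to is the standard mgr module toggle:)

```shell
ceph mgr module disable telemetry
ceph mgr module enable telemetry
```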
The telemetry server seems to be down. People have been notified :-)
Wido
Thanks!
The message HEALTH_ERR, in red, on the front of the dashboard, is an
interesting way to start the day. ;)
-Javier
Hello!
Today, I started the day with
# ceph -s
cluster:
health: HEALTH_ERR
Module 'telemetry' has failed:
HTTPSConnectionPool(host='telemetry.ceph.com', port=443): Max retries
exceeded with url: /report (Caused by
NewConnectionError('at 0x7fa97e5a4f90>: Failed to establish
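One way to catch failures like this before opening the dashboard is to parse the JSON form of the status output (`ceph -s -f json`). A minimal sketch, assuming the `MGR_MODULE_ERROR` health check name that Ceph raises when a mgr module crashes; the sample payload below is hand-built for illustration, not real cluster output:

```python
import json

def failed_modules(status_json: str) -> list[str]:
    """Return the mgr-module failure messages found in `ceph -s -f json` output."""
    status = json.loads(status_json)
    checks = status.get("health", {}).get("checks", {})
    failures = []
    for name, check in checks.items():
        # MGR_MODULE_ERROR is the health check raised for a crashed mgr module
        if name == "MGR_MODULE_ERROR":
            failures.append(check.get("summary", {}).get("message", ""))
    return failures

# Hypothetical sample payload shaped like `ceph -s -f json` health output
sample = json.dumps({
    "health": {
        "status": "HEALTH_ERR",
        "checks": {
            "MGR_MODULE_ERROR": {
                "severity": "HEALTH_ERR",
                "summary": {"message": "Module 'telemetry' has failed"},
            }
        },
    }
})
print(failed_modules(sample))  # prints ["Module 'telemetry' has failed"]
```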