[ceph-users] Re: cephadm orch thinks hosts are offline

2022-06-29 Thread Thomas Roth
[...] cf. above.
> ssh-copy-id -f -i /etc/ceph/ceph.pub root@lxbk0374
> ceph orch host add lxbk0374 10.20.2.161
-> 'ceph orch host ls' shows that node as no longer Offline.
-> Repeat with all the other hosts, and everything looks fine from the orch view as well.
My question: Did I [...]
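The fix quoted above boils down to re-distributing the cluster's SSH key and re-adding the host; a minimal sketch, assuming the key still lives at /etc/ceph/ceph.pub and reusing the poster's hostname and IP:

    ssh-copy-id -f -i /etc/ceph/ceph.pub root@lxbk0374   # push the cluster SSH key back onto the node
    ceph orch host add lxbk0374 10.20.2.161              # re-add the host so cephadm re-checks connectivity
    ceph orch host ls                                    # the host should no longer be listed as Offline

As the poster describes, the same two steps are then repeated for every host that shows as Offline.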

[ceph-users] Re: cephadm orch thinks hosts are offline

2022-06-27 Thread Thomas Roth
[...] let us see why it's failing, or, if there is no longer an issue connecting to the host, should mark the host online again.
Thanks,
- Adam King

On Thu, Jun 23, 2022 at 12:30 PM Thomas Roth wrote:
Hi all, found this bug https://tracker.ceph.com/issues/51629 (Octopus 15.2.13), reproduc[...]
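For the diagnostic step Adam King refers to, a sketch of two commands available on cephadm-managed clusters that can show why the mgr considers a host unreachable (this is not the thread's verbatim advice, and output details vary by release):

    ceph cephadm check-host lxbk0374   # mgr-side SSH/connectivity and prerequisite check for the host
    ceph log last cephadm              # recent cephadm log entries, including failed connection attempts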

[ceph-users] cephadm orch thinks hosts are offline

2022-06-23 Thread Thomas Roth
[...] seem to be unaffected.
Cheers,
Thomas

[ceph-users] active+undersized+degraded due to OSD size differences?

2022-06-19 Thread Thomas Roth
[...]ver? This is all Quincy, cephadm, so there is no ceph.conf anymore, and I did not find the command to inject my failure domain into the config database...
Regards,
Thomas
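On an existing cephadm/Quincy cluster the failure domain is not a plain config-database option but a property of the CRUSH rule, so one hedged way to address what the poster describes is to create a rule with 'osd' as the failure domain and point the affected pool at it (rule and pool names here are placeholders):

    ceph osd crush rule create-replicated rule-by-osd default osd   # replicated rule with failure domain "osd"
    ceph osd pool set <pool> crush_rule rule-by-osd                 # switch the undersized pool to that rule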

[ceph-users] ceph.pub not presistent over reboots?

2022-06-15 Thread Thomas Roth
[...] their nodes? Can't really believe that.
Regards,
Thomas
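If the key really does disappear from a node's authorized_keys after a reboot, it can be re-extracted from the mgr and pushed again; a small sketch (the hostname is a placeholder):

    ceph cephadm get-pub-key > ~/ceph.pub          # re-extract the cluster's public SSH key from the mgr
    ssh-copy-id -f -i ~/ceph.pub root@<hostname>   # re-install it on the rebooted node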

[ceph-users] set configuration options in the cephadm age

2022-06-14 Thread Thomas Roth
https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-pg/ talks about changing 'osd_crush_chooseleaf_type' before creating monitors or OSDs, for the special case of a 1-node cluster. However, the documentation fails to explain how/where to set this option, seeing that with 'cep[...]
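With cephadm there is indeed no ceph.conf to edit by hand; the usual way to get such an option in place before any mons or OSDs exist is to hand bootstrap a small config file, or to set it in the config database afterwards. A sketch, assuming the filename initial-ceph.conf and a placeholder monitor IP (note that changing the option later does not rewrite CRUSH rules that were already created):

    # initial-ceph.conf (assumed name)
    [global]
    osd_crush_chooseleaf_type = 0

    cephadm bootstrap --mon-ip <mon-ip> --config initial-ceph.conf   # option is set before any daemon is created
    # or, on a cluster that already exists:
    ceph config set global osd_crush_chooseleaf_type 0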

[ceph-users] Re: v17.2.0 Quincy released

2022-05-25 Thread Thomas Roth
Hello, just found that this "feature" is not restricted to upgrades - I just tried to bootstrap an entirely new cluster with Quincy, also with the fatal switch to the non-root user: adding the second mon results in
> Unable to write lxmon1:/etc/ceph/ceph.conf: scp: /tmp/etc/ceph/ceph.conf.new: Per[...]
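A possible workaround for this bootstrap failure, sketched here rather than taken from the thread, is to keep root as the SSH user cephadm uses to reach the other hosts, or to switch back to it on a cluster that is already up (monitor IP is a placeholder):

    cephadm bootstrap --mon-ip <mon-ip> --ssh-user root   # bootstrap with root as the orchestration SSH user
    ceph cephadm set-user root                            # or change the user on an existing cluster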

[ceph-users] Re: Multipath and cephadm

2022-01-30 Thread Thomas Roth
[...]hod (which would be much more ideal), hence there is no obvious way to use separate db_devices, but this does look to work for me as far as it goes.
Hope that helps,
Peter Childs

On Tue, 25 Jan 2022, 17:53 Thomas Roth wrote:
Would like to know that as well. I have the same setup - cephadm, [...]
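The spec Peter Childs hints at is an OSD service specification that lists the multipath devices explicitly; a hedged sketch with placeholder device paths and service id, applied with 'ceph orch apply -i osd-spec.yml':

    # osd-spec.yml (assumed filename)
    service_type: osd
    service_id: multipath_osds
    placement:
      host_pattern: '*'
    data_devices:
      paths:
        - /dev/mapper/mpatha
        - /dev/mapper/mpathb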

[ceph-users] Re: Multipath and cephadm

2022-01-25 Thread Thomas Roth

[ceph-users] Re: HDD <-> OSDs

2021-06-22 Thread Thomas Roth

[ceph-users] HDD <-> OSDs

2021-06-22 Thread Thomas Roth
[...] try cephfs on ~10 servers with 70 HDDs each. That would make each system have to deal with 70 OSDs, on 70 LVs? Really no aggregation of the disks?
Regards,
Thomas
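Ceph does run one OSD per disk (each on its own LV created by ceph-volume); disks are not aggregated unless a RAID/LVM layer is built underneath them first. A minimal drive-group sketch that turns every rotational disk on every host into its own OSD (the service id and filename are placeholders), applied with 'ceph orch apply -i one-osd-per-hdd.yml':

    # one-osd-per-hdd.yml (assumed filename)
    service_type: osd
    service_id: one_osd_per_hdd
    placement:
      host_pattern: '*'
    data_devices:
      rotational: 1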