r, cf. above.
> ssh-copy-id -f -i /etc/ceph/ceph.pub root@lxbk0374
> ceph orch host add lxbk0374 10.20.2.161
-> 'ceph orch host ls' shows that node is no longer Offline.
-> Repeat with all the other hosts, and everything looks fine from the
orch view as well.
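For the other hosts, the same two steps can be scripted; a minimal sketch, assuming the remaining node names (placeholders below) resolve via DNS:

# placeholder host list - substitute the real node names
for h in lxbk0375 lxbk0376 lxbk0377; do
    ssh-copy-id -f -i /etc/ceph/ceph.pub root@$h
    ceph orch host add $h $(getent hosts $h | awk '{print $1}')
done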
My question: Did I
let us see why it's failing, or, if there is no longer an issue
connecting to the host, should mark the host online again.
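A minimal sketch of such a re-check, assuming the cephadm check-host facility is what is meant (the hostname is taken from the example above):

# re-test connectivity to one of the hosts still marked Offline
ceph cephadm check-host lxbk0374
# then confirm the state change
ceph orch host ls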
Thanks,
- Adam King
On Thu, Jun 23, 2022 at 12:30 PM Thomas Roth wrote:
Hi all,
found this bug https://tracker.ceph.com/issues/51629 (Octopus 15.2.13),
reproduc
seem to be unaffected.
Cheers
Thomas
--
----
Thomas Roth
Department: Informationstechnologie
Location: SB3 2.291
GSI Helmholtzzentrum für Schwerionenforschung GmbH
Planckstraße 1, 64291 Darmstadt, Germany, www.gsi.de
ver?
This is all Quincy, cephadm, so there is no ceph.conf anymore, and I did not
find the command to inject my failure domain into the config database...
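For reference, a minimal sketch of where such a setting usually goes; whether this matches the intended failure-domain change is an assumption, and the pool/rule names below are placeholders:

# put the option into the central config database (it only influences
# the default CRUSH rule created at cluster-creation time)
ceph config set global osd_crush_chooseleaf_type 0

# on an existing cluster, create a rule with the desired failure domain
# and point the pool at it instead
ceph osd crush rule create-replicated osd-rule default osd
ceph osd pool set mypool crush_rule osd-rule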
Regards
Thomas
--
----
Thomas Roth IT-HPC-Linux
Location: SB3 2.291
their nodes? Can't
really believe that.
Regards
Thomas
--
----
Thomas Roth
Department: Informationstechnologie
Location: SB3 2.291
GSI Helmholtzzentrum für Schwerionenforschung GmbH
Planckstraße 1, 64291 Darmstadt, Germany, www.gsi.de
https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-pg/
talks about changing 'osd_crush_chooseleaf_type' before creating monitors or OSDs, for the special
case of a 1-node cluster.
However, the documentation does not explain how or where to set this option, seeing that with 'cephadm' there is no ceph.conf to edit.
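One route that should work with cephadm is handing the option to bootstrap as an initial config file; a minimal sketch, assuming a fresh single-node bootstrap (the MON IP is a placeholder):

printf '[global]\nosd_crush_chooseleaf_type = 0\n' > initial-ceph.conf
# Quincy also has a 'cephadm bootstrap --single-host-defaults' flag
cephadm bootstrap --mon-ip 192.0.2.10 --config initial-ceph.conf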
Hello,
just found that this "feature" is not restricted to upgrades - I tried to bootstrap an entirely new cluster with Quincy, again with the fatal
switch to a non-root user: adding the second mon results in
> Unable to write lxmon1:/etc/ceph/ceph.conf: scp: /tmp/etc/ceph/ceph.conf.new: Permission denied
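Not a fix, but a minimal sketch of how to inspect the SSH setup the orchestrator is actually using when debugging this; the user name 'deploy' is only an example:

# show the SSH config the cephadm mgr module uses
ceph cephadm get-ssh-config
# the non-root user can be (re)set with 'ceph cephadm set-user deploy'
# and needs passwordless sudo on every host, e.g.:
ssh deploy@lxmon1 sudo true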
method (which would be much more ideal), hence there is no obvious way to use
separate db_devices, but this does seem to work for me as far as it goes.
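For comparison, the spec-file route does allow separate db_devices; a minimal sketch, where the service_id, host_pattern and rotational filters are assumptions to adapt:

cat > osd_spec.yml <<EOF
service_type: osd
service_id: hdd_osds_with_ssd_db
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
EOF
# preview what cephadm would create before applying for real
ceph orch apply -i osd_spec.yml --dry-run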
Hope that helps
Peter Childs
On Tue, 25 Jan 2022, 17:53 Thomas Roth wrote:
Would like to know that as well.
I have the same setup - cephadm,
--
----
Thomas Roth
HPC Department
GSI Helmholtzzentrum für Schwerionenforschung GmbH
Planckstr. 1, 64291 Darmstadt, http://www.gsi.de/
Limited liability company (GmbH)
Registered office of the company / Reg
Regards,
Thomas
--
--------
Thomas Roth
Department: IT
GSI Helmholtzzentrum für Schwerionenforschung GmbH
www.gsi.de
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-leave@ceph.io
try cephfs on ~10 servers with 70 HDDs each. That would mean each system
has to deal with 70 OSDs, on 70 LVs?
Really no aggregation of the disks?
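As a rough sanity check on what 70 OSDs per host means: with the default osd_memory_target of 4 GiB per OSD daemon, that is about 70 x 4 GiB, roughly 280 GiB of RAM per node for OSD caches alone (the figure assumes the default is unchanged; it can be verified as below):

# default is 4294967296 bytes = 4 GiB per OSD daemon
ceph config get osd osd_memory_target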
Regards,
Thomas
--
Thomas Roth
Department: IT
GSI Helmholtzzentrum für Schwerionenforschung GmbH