[ceph-users] Re: Ceph crash :-(

2024-06-13 Thread Robert Sander
the Ceph packages. download.ceph.com has packages for Ubuntu 22.04 and nothing for 24.04. Therefore I would assume Ubuntu 24.04 is not a supported platform for Ceph (unless you use the cephadm orchestrator and containers). BTW: Please keep the discussion on the mailing list. Regards -- Robert Sander

[ceph-users] Re: Ceph crash :-(

2024-06-13 Thread Robert Sander
not use Ceph packages shipped from a distribution but always the ones from download.ceph.com or even better the container images that come with the orchestrator. Which version do your other Ceph nodes run on? Regards -- Robert Sander

[ceph-users] Re: tuning for backup target cluster

2024-06-04 Thread Robert Sander
table or logical volume signatures. Regards -- Robert Sander

[ceph-users] Re: Update OS with clean install

2024-06-04 Thread Robert Sander
to do these: * Set host in maintenance mode * Reinstall host with newer OS * Configure host with correct settings (for example cephadm user SSH key etc.) * Unset maintenance mode for the host * For OSD hosts run ceph cephadm osd activate Regards -- Robert Sander
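A minimal command sketch of that workflow, assuming a hypothetical host name node05 (the OS reinstall and SSH key/user restore in between are site-specific):

    ceph orch host maintenance enter node05    # set host in maintenance mode
    # ... reinstall the OS, restore the cephadm SSH user and key ...
    ceph orch host maintenance exit node05     # unset maintenance mode
    ceph cephadm osd activate node05           # re-activate the existing OSDs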

[ceph-users] Re: tuning for backup target cluster

2024-06-04 Thread Robert Sander
multiple block devices and for the orchestrator they are completely separate. Regards -- Robert Sander

[ceph-users] Re: How to create custom container that exposes a listening port?

2024-05-31 Thread Robert Sander
On 5/31/24 16:07, Robert Sander wrote: extra_container_args: - "--publish 8080/tcp" Never mind, in the custom container service specification it's "args", not "extra_container_args". Regards -- Robert Sander
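A hedged sketch of a custom container service specification along those lines (service_id, image and placement are made-up placeholders; only the use of "args" instead of "extra_container_args" comes from the thread):

    service_type: container
    service_id: myapp
    placement:
      hosts:
        - host1
    spec:
      image: quay.io/example/myapp:latest
      args:
        - "--publish 8080/tcp"

Applied with "ceph orch apply -i myapp.yaml".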

[ceph-users] How to create custom container that exposes a listening port?

2024-05-31 Thread Robert Sander
-container-arguments Regards -- Robert Sander

[ceph-users] Re: How to setup NVMeoF?

2024-05-30 Thread Robert Sander
30T13:59:49.678809906+00:00", grpc_status:12, grpc_message:"Method not found!"}" Is this not production ready? Why is it in the documentation for a released Ceph version? Regards -- Robert Sander

[ceph-users] Re: How to setup NVMeoF?

2024-05-30 Thread Robert Sander
Hi, On 5/30/24 11:58, Robert Sander wrote: I am trying to follow the documentation at https://docs.ceph.com/en/reef/rbd/nvmeof-target-configure/ to deploy an NVMe over Fabric service. It looks like the cephadm orchestrator in this 18.2.2 cluster uses the image quay.io/ceph/nvmeof:0.0.2

[ceph-users] How to setup NVMeoF?

2024-05-30 Thread Robert Sander
available? Regards -- Robert Sander

[ceph-users] Re: Rebalance OSDs after adding disks?

2024-05-30 Thread Robert Sander
On 5/30/24 08:53, tpDev Tester wrote: Can someone please point me to the docs how I can expand the capacity of the pool without such problems. Please show the output of ceph status ceph df ceph osd df tree ceph osd crush rule dump ceph osd pool ls detail Regards -- Robert Sander

[ceph-users] Re: We are using ceph octopus environment. For client can we use ceph quincy?

2024-05-30 Thread Robert Sander
On 5/27/24 09:28, s.dhivagar@gmail.com wrote: We are using ceph octopus environment. For client can we use ceph quincy? Yes. -- Robert Sander

[ceph-users] Re: cephadm basic questions: image config, OS reimages

2024-05-16 Thread Robert Sander
On 5/16/24 17:50, Robert Sander wrote: cephadm osd activate HOST would re-activate the OSDs. Small but important typo: It's ceph cephadm osd activate HOST Regards -- Robert Sander

[ceph-users] Re: cephadm basic questions: image config, OS reimages

2024-05-16 Thread Robert Sander
m to noout and will try to move other services away from the host if possible. Regards -- Robert Sander

[ceph-users] Re: MDS crash in interval_set: FAILED ceph_assert(p->first <= start)

2024-05-10 Thread Robert Sander
On 5/9/24 07:22, Xiubo Li wrote: We are discussing the same issue in slack thread https://ceph-storage.slack.com/archives/C04LVQMHM9B/p1715189877518529. Why is there a discussion about a bug off-list on a proprietary platform? Regards -- Robert Sander

[ceph-users] Re: MDS 17.2.7 crashes at rejoin

2024-05-06 Thread Robert Sander
Hi, would an update to 18.2 help? Regards -- Robert Sander

[ceph-users] MDS 17.2.7 crashes at rejoin

2024-05-06 Thread Robert Sander
::v15_2_0::list&, int)+0x290) [0x5614ac87ff90] 13: (MDSContext::complete(int)+0x5f) [0x5614aca41f4f] 14: (MDSIOContextBase::complete(int)+0x534) [0x5614aca426e4] 15: (Finisher::finisher_thread_entry()+0x18d) [0x7f1930b7884d] 16: /lib64/libpthread.so.0(+0x81ca) [0x7f192fac81c

[ceph-users] Re: Remove failed OSD

2024-05-04 Thread Robert Sander
://docs.ceph.com/en/reef/cephadm/services/osd/#remove-an-osd This will make sure that the OSD is not needed any more (data is drained etc). Regards -- Robert Sander
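A minimal sketch of the orchestrator-driven removal described there (OSD id 23 is just an example):

    ceph orch osd rm 23        # drains the OSD before removing it
    ceph orch osd rm status    # follow the progress of the drain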

[ceph-users] Re: ceph recipe for nfs exports

2024-04-29 Thread Robert Sander
e to write to the CephFS at first. Set squash to "no_root_squash" to be able to write as root to the NFS share. Create a directory and change its permissions to someone else. Regards -- Robert Sander
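A small sketch of the last step, assuming the export is mounted at the hypothetical path /mnt/nfs and root squash is disabled:

    mkdir /mnt/nfs/projects
    chown 1000:1000 /mnt/nfs/projects   # hand the directory over to an unprivileged user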

[ceph-users] Re: Ceph Squid released?

2024-04-29 Thread Robert Sander
On 4/29/24 09:36, Alwin Antreich wrote: Who knows. I don't see any packages on download.ceph.com for Squid. Ubuntu has them: https://packages.ubuntu.com/noble/ceph Regards -- Robert Sander

[ceph-users] Re: Ceph Squid released?

2024-04-29 Thread Robert Sander
members and tiers and to sound the marketing drums a bit. :) The Ubuntu 24.04 release notes also claim that this release comes with Ceph Squid: https://discourse.ubuntu.com/t/noble-numbat-release-notes/39890 Regards -- Robert Sander

[ceph-users] Ceph Squid released?

2024-04-29 Thread Robert Sander
Hi, https://www.linuxfoundation.org/press/introducing-ceph-squid-the-future-of-storage-today Does the LF know more than the mailing list? Regards -- Robert Sander

[ceph-users] Re: Add node-exporter using ceph orch

2024-04-26 Thread Robert Sander
: '*' If you apply this YAML code the orchestrator should deploy one node-exporter daemon to each host of the cluster. Regards -- Robert Sander
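A minimal node-exporter service specification with that placement, as a sketch in the usual cephadm spec layout:

    service_type: node-exporter
    service_name: node-exporter
    placement:
      host_pattern: '*'

Applied with "ceph orch apply -i node-exporter.yaml".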

[ceph-users] Re: Add node-exporter using ceph orch

2024-04-26 Thread Robert Sander
and its placement strategy. What does your node-exporter service look like? ceph orch ls node-exporter --export Regards -- Robert Sander

[ceph-users] Re: ceph recipe for nfs exports

2024-04-25 Thread Robert Sander
"pseudo path" This is an NFSv4 concept. It allows mounting a virtual root of the NFS server and accessing all exports below it without having to mount each one separately. Regards -- Robert Sander
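As an illustration (server name and mount point are placeholders), a client can mount the pseudo root and browse all exports below it:

    mount -t nfs4 nfs.example.com:/ /mnt/nfs
    ls /mnt/nfs    # each export appears under its pseudo path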

[ceph-users] Re: Have a problem with haproxy/keepalived/ganesha/docker

2024-04-16 Thread Robert Sander
and the NFS client cannot be "load balanced" to another backend NFS server. There is currently no point in configuring an ingress service without failover. The NFS clients have to remount the NFS share anyway if their current NFS server dies. Regards -- Robert Sander

[ceph-users] Re: Call for Interest: Managed SMB Protocol Support

2024-03-25 Thread Robert Sander
running Debian since before then you have user IDs and group IDs in the range 500 - 1000. Regards -- Robert Sander

[ceph-users] Re: Upgrading from Reef v18.2.1 to v18.2.2

2024-03-21 Thread Robert Sander
Hi, On 3/21/24 14:50, Michael Worsham wrote: Now that Reef v18.2.2 has come out, is there a set of instructions on how to upgrade to the latest version via using Cephadm? Yes, there is: https://docs.ceph.com/en/reef/cephadm/upgrade/ Regards -- Robert Sander
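In short, the documented cephadm upgrade boils down to something like this sketch (target version taken from the thread):

    ceph orch upgrade start --ceph-version 18.2.2
    ceph orch upgrade status    # follow the progress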

[ceph-users] Re: OSD does not die when disk has failures

2024-03-19 Thread Robert Sander
Hi, On 3/19/24 13:00, Igor Fedotov wrote: translating EIO to upper layers rather than crashing an OSD is a valid default behavior. One can alter this by setting bluestore_fail_eio parameter to true. What benefit lies in this behavior when in the end client IO stalls? Regards -- Robert

[ceph-users] Re: PGs with status active+clean+laggy

2024-03-05 Thread Robert Sander
Hi, On 3/5/24 13:05, ricardom...@soujmv.com wrote: I have a ceph quincy cluster with 5 nodes currently. But only 3 with SSDs. Do not mix HDDs and SSDs in the same pool. Regards -- Robert Sander

[ceph-users] Re: Uninstall ceph rgw

2024-03-05 Thread Robert Sander
as created. They usually have "rgw" in their name. Regards -- Robert Sander

[ceph-users] Re: Upgraded 16.2.14 to 16.2.15

2024-03-05 Thread Robert Sander
Regards -- Robert Sander

[ceph-users] Re: Cephadm and Ceph.conf

2024-02-26 Thread Robert Sander
reads=500" rgw_crypt_require_ssl = false ceph config assimilate-conf may be of help here. Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin https://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030 / 405051-19 Amtsgericht Berlin-Charlotten

[ceph-users] Re: Cephadm and Ceph.conf

2024-02-26 Thread Robert Sander
can adjust settings with "ceph config" or the Configuration tab of the Dashboard. Regards -- Robert Sander

[ceph-users] Re: Some questions about cephadm

2024-02-26 Thread Robert Sander
/reef/cephadm/install/#deployment-in-an-isolated-environment Regards -- Robert Sander

[ceph-users] Re: Some questions about cephadm

2024-02-26 Thread Robert Sander
, Password) Regards -- Robert Sander

[ceph-users] Re: Understanding subvolumes

2024-02-02 Thread Robert Sander
and other cloud technologies. If you run a classical file service on top of CephFS you usually do not need subvolumes but can go with normal quotas on directories. Regards -- Robert Sander
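Directory quotas on CephFS are set via extended attributes; a sketch with a made-up path and limit:

    setfattr -n ceph.quota.max_bytes -v 107374182400 /mnt/cephfs/projects   # 100 GiB quota
    getfattr -n ceph.quota.max_bytes /mnt/cephfs/projects                   # check it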

[ceph-users] Re: Questions about the CRUSH details

2024-01-25 Thread Robert Sander
t of PGs, to not run into statistical edge cases. Regards -- Robert Sander

[ceph-users] Re: How many pool for cephfs

2024-01-24 Thread Robert Sander
e the SSDs for the OSDs' RocksDB? Where do you plan to store the metadata pools for CephFS? They should be stored on fast media. Regards -- Robert Sander

[ceph-users] Re: How many pool for cephfs

2024-01-24 Thread Robert Sander
question, should I have a designated pool for S3 storage or can/should I use the same cephfs_data_replicated/erasure pool ? No, S3 needs its own pools. It cannot re-use CephFS pools. Regards -- Robert Sander

[ceph-users] Re: Cephadm orchestrator and special label _admin in 17.2.7

2024-01-19 Thread Robert Sander
not update /etc/ceph/ceph.conf. Only when I again do "ceph mgr fail" the new MGR will update /etc/ceph/ceph.conf on the hosts labeled with _admin. Regards -- Robert Sander

[ceph-users] Re: Cephadm orchestrator and special label _admin in 17.2.7

2024-01-18 Thread Robert Sander
was at "*", so all hosts. I have set that to "label:_admin". It still does not put ceph.conf into /etc/ceph when adding the label _admin. Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berlin https://www.heinlein-support.de Tel: 030 / 405051-43

[ceph-users] Re: Cephadm orchestrator and special label _admin in 17.2.7

2024-01-18 Thread Robert Sander
cephtest23:/etc/ceph/ceph.client.admin.keyring 2024-01-18T11:47:08.212303+0100 mgr.cephtest32.ybltym [INF] Updating cephtest23:/var/lib/ceph/ba37db20-2b13-11eb-b8a9-871ba11409f6/config/ceph.client.admin.keyring Regards -- Robert Sander

[ceph-users] Cephadm orchestrator and special label _admin in 17.2.7

2024-01-18 Thread Robert Sander
. Both files are placed into the /var/lib/ceph//config directory. Has something changed? ¹: https://docs.ceph.com/en/quincy/cephadm/host-management/#special-host-labels Regards -- Robert Sander

[ceph-users] Re: cephadm bootstrap on 3 network clusters

2024-01-03 Thread Robert Sander
with the cluster network. Regards -- Robert Sander

[ceph-users] Re: cephadm bootstrap on 3 network clusters

2024-01-03 Thread Robert Sander
. It is used to determine the public network. Regards -- Robert Sander

[ceph-users] Re: Ceph Docs: active releases outdated

2024-01-03 Thread Robert Sander
Hi Eugen, the release info is current only in the latest branch of the documentation: https://docs.ceph.com/en/latest/releases/ Regards -- Robert Sander

[ceph-users] Re: Building new cluster had a couple of questions

2023-12-22 Thread Robert Sander
-- Robert Sander

[ceph-users] Re: Building new cluster had a couple of questions

2023-12-22 Thread Robert Sander
Hi, On 22.12.23 11:41, Albert Shih wrote: for n in 1-100 Put off line osd on server n Uninstall docker on server n Install podman on server n redeploy on server n end Yep, that's basically the procedure. But first try it on a test cluster. Regards -- Robert Sander

[ceph-users] Re: Building new cluster had a couple of questions

2023-12-22 Thread Robert Sander
On 21.12.23 22:27, Anthony D'Atri wrote: It's been claimed to me that almost nobody uses podman in production, but I have no empirical data. I even converted clusters from Docker to podman while they stayed online thanks to "ceph orch redeploy". Regards -- Robert Sander

[ceph-users] Re: Building new cluster had a couple of questions

2023-12-21 Thread Robert Sander
Hi, On 21.12.23 15:13, Nico Schottelius wrote: I would strongly recommend k8s+rook for new clusters, also allows running Alpine Linux as the host OS. Why would I want to learn Kubernetes before I can deploy a new Ceph cluster when I have no need for K8s at all? Regards -- Robert Sander

[ceph-users] Re: Building new cluster had a couple of questions

2023-12-21 Thread Robert Sander
. Everything needed for the Ceph containers is provided by podman. Regards -- Robert Sander

[ceph-users] Re: Building new cluster had a couple of questions

2023-12-21 Thread Robert Sander
... CentOS thing... what distro appears to be the most straightforward to use with Ceph? I was going to try and deploy it on Rocky 9. Any distribution with a recent systemd, podman, LVM2 and time synchronization is viable. I prefer Debian, others prefer RPM-based distributions. Regards -- Robert Sander

[ceph-users] Re: EC Profiles & DR

2023-12-05 Thread Robert Sander
On 12/5/23 10:06, duluxoz wrote: I'm confused - doesn't k4 m2 mean that you can lose any 2 out of the 6 osds? Yes, but OSDs are not a good failure zone. The host is the smallest failure zone that is practicable and safe against data loss. Regards -- Robert Sander

[ceph-users] Re: EC Profiles & DR

2023-12-05 Thread Robert Sander
r that you risk losing data. Erasure coding is possible with a cluster size of 10 nodes or more. With smaller clusters you have to go with replicated pools. Regards -- Robert Sander

[ceph-users] Re: ceph osd dump_historic_ops

2023-12-01 Thread Robert Sander
s the UUID of the Ceph cluster, $OSDID is the OSD id. Regards -- Robert Sander

[ceph-users] Re: Where is a simple getting started guide for a very basic cluster?

2023-11-28 Thread Robert Sander
to the list. Regards -- Robert Sander

[ceph-users] Re: Where is a simple getting started guide for a very basic cluster?

2023-11-26 Thread Robert Sander
or NTP) - LVM2 for provisioning storage devices Regards -- Robert Sander

[ceph-users] Re: ceph storage pool error

2023-11-08 Thread Robert Sander
to the pool? Can you help me? 1 and 2 clusters are working. I want to view my data from them and then transfer them to another place. How can I do this? I have never used Ceph before. Please send the output of: ceph -s ceph health detail ceph osd df tree Regards -- Robert Sander

[ceph-users] Re: Emergency, I lost 4 monitors but all osd disk are safe

2023-11-02 Thread Robert Sander
object is stored on the OSD data partition and without it nobody knows where each object is. The data is lost. Regards -- Robert Sander

[ceph-users] Re: Emergency, I lost 4 monitors but all osd disk are safe

2023-11-02 Thread Robert Sander
/latest/man/8/monmaptool/ https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-mon/#recovering-a-monitor-s-broken-monmap This way the remaining MON will be the only one in the map and will have quorum and the cluster will work again. Regards -- Robert Sander
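The linked procedure roughly looks like this sketch (mon IDs and paths are placeholders; run with the surviving mon stopped, and inside the mon container on cephadm clusters):

    ceph-mon -i mon01 --extract-monmap /tmp/monmap   # dump the current monmap
    monmaptool --print /tmp/monmap
    monmaptool /tmp/monmap --rm mon02 --rm mon03     # drop the lost monitors
    ceph-mon -i mon01 --inject-monmap /tmp/monmap    # write the edited map back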

[ceph-users] Re: Emergency, I lost 4 monitors but all osd disk are safe

2023-11-02 Thread Robert Sander
-store-failures Regards -- Robert Sander

[ceph-users] Re: How to deal with increasing HDD sizes ? 1 OSD for 2 LVM-packed HDDs ?

2023-10-18 Thread Robert Sander
reasons. Regards -- Robert Sander

[ceph-users] Re: cephadm configuration in git

2023-10-11 Thread Robert Sander
the service specifications with "ceph orch ls --export" and import the YAML file with "ceph orch apply -i …". This does not cover the hosts in the cluster. Regards -- Robert Sander
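A sketch of how that could be used to keep the cluster configuration in git (file names are arbitrary):

    ceph orch ls --export > service-specs.yaml   # snapshot all service specifications
    ceph orch host ls > hosts.txt                # the host list is not part of the export
    # commit both files; to restore the specs later:
    ceph orch apply -i service-specs.yaml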

[ceph-users] Re: Status of IPv4 / IPv6 dual stack?

2023-09-20 Thread Robert Sander
ntation was a little bit clearer on this topic. Regards -- Robert Sander

[ceph-users] Status of IPv4 / IPv6 dual stack?

2023-09-15 Thread Robert Sander
questions: Is dual-stack networking with IPv4 and IPv6 now supported or not? From which version on is it considered stable? Are OSDs now able to register themselves with two IP addresses in the cluster map? MONs too? Regards -- Robert Sander

[ceph-users] Re: Ceph services failing to start after OS upgrade

2023-09-13 Thread Robert Sander
on the Proxmox forum at https://forum.proxmox.com/ as they distribute their own Ceph packages. Regards -- Robert Sander

[ceph-users] Re: Separating Mons and OSDs in Ceph Cluster

2023-09-11 Thread Robert Sander
. All OSDs will immediately know about the new MONs. The same goes when removing an old MON. After that you have to update the ceph.conf on each host to make the change "reboot safe". No need to restart any other component including OSDs. Regards -- Robert Sander

[ceph-users] Re: Status of diskprediction MGR module?

2023-08-28 Thread Robert Sander
? How do I add a dashboard in cephadm managed Grafana that shows the values from smartctl_exporter? Where do I get such a dashboard? How do I add alerts to the cephadm managed Alert-Manager? Where do I get useful alert definitions for smartctl_exporter metrics? Regards -- Robert Sander

[ceph-users] Status of diskprediction MGR module?

2023-08-28 Thread Robert Sander
have a cluster where SMART data is available from the disks (tested with smartctl and visible in the Ceph dashboard), but even with an enabled diskprediction_local module no health and lifetime info is shown. Regards -- Robert Sander

[ceph-users] Re: cephadm orchestrator does not restart daemons [was: ceph orch upgrade stuck between 16.2.7 and 16.2.13]

2023-08-16 Thread Robert Sander
processes must have used a lock that blocked new cephadm commands. Regards -- Robert Sander

[ceph-users] cephadm orchestrator does not restart daemons [was: ceph orch upgrade stuck between 16.2.7 and 16.2.13]

2023-08-16 Thread Robert Sander
phmon01 [DBG] _kick_serve_loop The container for crash.cephmon01 does not get restarted. It looks like the service loop does not get executed. Can we see what jobs are in this queue and why they do not get executed? Regards -- Robert Sander

[ceph-users] Re: ceph orch upgrade stuck between 16.2.7 and 16.2.13

2023-08-15 Thread Robert Sander
the image from quay.io. Regards -- Robert Sander

[ceph-users] Re: ceph orch upgrade stuck between 16.2.7 and 16.2.13

2023-08-15 Thread Robert Sander
is that there is no information on which OSD cephadm tries to upgrade next. There is no failure reported. It seems to just sit there and wait for something. Regards -- Robert Sander

[ceph-users] ceph orch upgrade stuck between 16.2.7 and 16.2.13

2023-08-15 Thread Robert Sander
42503) pacific (stable)": 2 }, "overall": { "ceph version 16.2.13 (5378749ba6be3a0868b51803968ee9cde4833a3e) pacific (stable)": 56, "ceph version 16.2.7 (dd0603118f56ab514f133c8d2e3adfc983942503) pacific (sta

[ceph-users] Re: Ceph 17.2.6 alert-manager receives error 500 from inactive MGR

2023-07-27 Thread Robert Sander
On 7/27/23 13:27, Eugen Block wrote: [2] https://github.com/ceph/ceph/pull/47011 This PR implements the 204 HTTP code that I see in my test cluster. I wonder why in the same situation the other cluster returns a 500 here. Regards -- Robert Sander

[ceph-users] Ceph 17.2.6 alert-manager receives error 500 from inactive MGR

2023-07-26 Thread Robert Sander
mgr handle_mgr_map I am now activating We have a test cluster running also with version 17.2.6 where this does not happen. In this test cluster the passive MGRs return an HTTP code 204 when the alert-manager tries to request /api/prometheus_receiver. What is ha

[ceph-users] Re: Per minor-version view on docs.ceph.com

2023-07-13 Thread Robert Sander
not shine a good light on the project. Regards -- Robert Sander

[ceph-users] Re: Are replicas 4 or 6 safe during network partition? Will there be split-brain?

2023-07-10 Thread Robert Sander
would still be working in site A since there are still 2 OSDs, even without mon quorum. The site without MON quorum will stop working completely. Regards -- Robert Sander

[ceph-users] Re: device class for nvme disk is ssd

2023-06-28 Thread Robert Sander
internally? No. When creating the OSD ceph-volume looks at /sys/class/block/DEVICE/queue/rotational to determine if it's an HDD (file contains 1) or not (file contains 0). If you need to distinguish between SSD and NVMe you can manually assign another device class to the OSDs. Regards -- Robert
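If the distinction matters, the device class can be reassigned manually; a sketch with a made-up OSD id:

    ceph osd crush rm-device-class osd.12
    ceph osd crush set-device-class nvme osd.12
    ceph osd df tree    # verify the new class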

[ceph-users] Re: Ceph iSCSI GW not working with VMware VMFS and Windows Clustered Storage Volumes (CSV)

2023-06-19 Thread Robert Sander
/rbd/iscsi-initiators/ AFAIK VMware uses these in VMFS. Regards -- Robert Sander

[ceph-users] Cluster without messenger v1, new MON still binds to port 6789

2023-06-01 Thread Robert Sander
Hi, a cluster has ms_bind_msgr1 set to false in the config database. Newly created MONs still listen on port 6789 and add themselves as providing messenger v1 into the monmap. How do I change that? Shouldn't the MONs use the config for ms_bind_msgr1? Regards -- Robert Sander

[ceph-users] Re: Encryption per user Howto

2023-05-26 Thread Robert Sander
want it pay with performance penalties. I understand this use case. But this would still mean that the client encrypts the data. In your case the CephFS mount or with S3 the rados-gateway. Regards -- Robert Sander

[ceph-users] Re: Encryption per user Howto

2023-05-23 Thread Robert Sander
On 23.05.23 08:42, huxia...@horebdata.cn wrote: Indeed, the question is on server-side encryption with keys managed by ceph on a per-user basis What kind of security do you want to achieve with encryption keys stored on the server side? Regards -- Robert Sander

[ceph-users] Re: OSD_TOO_MANY_REPAIRS on random OSDs causing clients to hang

2023-04-26 Thread Robert Sander
On 26.04.23 13:24, Thomas Hukkelberg wrote: [WRN] OSD_TOO_MANY_REPAIRS: Too many repaired reads on 1 OSDs osd.34 had 9936 reads repaired Are there any messages in the kernel log that indicate this device has read errors? Have you considered replacing the disk? Regards -- Robert Sander

[ceph-users] Re: How to replace an HDD in a OSD with shared SSD for DB/WAL

2023-04-21 Thread Robert Sander
-UUID is the logical volume for the old OSD you could have run this: "ceph orch osd rm 23", then replace the faulty HDD, then "ceph orch daemon add osd compute11:data_devices=/dev/sda,db_devices=ceph-UUID/osd-db-UUID". This will reuse the existing logical volume for the OSD DB. Regards -- Robert Sander

[ceph-users] Re: deploying Ceph using FQDN for MON / MDS Services

2023-04-18 Thread Robert Sander
On 18.04.23 06:12, Lokendra Rathour wrote: but if I try mounting from a normal Linux machine with connectivity enabled between Ceph mon nodes, it gives the error as stated before. Have you installed ceph-common on the "normal Linux machine"? Regards -- Robert Sander

[ceph-users] Re: deploying Ceph using FQDN for MON / MDS Services

2023-04-17 Thread Robert Sander
On 14.04.23 12:17, Lokendra Rathour wrote: *mount: /mnt/image: mount point does not exist.* Have you created the mount point? Regards -- Robert Sander

[ceph-users] Re: Adding new server to existing ceph cluster - with separate block.db on NVME

2023-03-29 Thread Robert Sander
ed: true" to the specification. After that ceph orch apply -i osd.yml Or you could just remove the specification with "ceph orch rm NAME". The OSD service will be removed but the OSD will remain. Regards -- Robert Sander Heinlein Consulting GmbH Schwedter Str. 8/9b, 10119 Berli

[ceph-users] Re: Adding new server to existing ceph cluster - with separate block.db on NVME

2023-03-28 Thread Robert Sander
orchestrator? Which version? Have you tried ceph orch daemon add osd host1:data_devices=/dev/sda,/dev/sdb,db_devices=/dev/nvme0 as shown on https://docs.ceph.com/en/quincy/cephadm/services/osd/ ? Regards -- Robert Sander
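The same layout can also be expressed as an OSD service specification, sketched here with placeholder values following the drive-group syntax from those docs:

    service_type: osd
    service_id: hdd_with_nvme_db
    placement:
      hosts:
        - host1
    spec:
      data_devices:
        rotational: 1
      db_devices:
        rotational: 0

Applied with "ceph orch apply -i osd-spec.yaml".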

[ceph-users] Re: Ceph cluster out of balance after adding OSDs

2023-03-28 Thread Robert Sander
the device class and assign it to the pool charlotte.rgw.buckets.data. After that the autoscaler will be able to work again. Regards -- Robert Sander
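A sketch of such a rule and pool assignment (rule name and device class are examples, the pool name comes from the thread):

    ceph osd crush rule create-replicated rgw-ssd default host ssd
    ceph osd pool set charlotte.rgw.buckets.data crush_rule rgw-ssd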

[ceph-users] Re: Ceph cluster out of balance after adding OSDs

2023-03-27 Thread Robert Sander
On 27.03.23 16:34, Pat Vaughan wrote: Yes, all the OSDs are using the SSD device class. Do you have multiple CRUSH rules by chance? Are all pools using the same CRUSH rule? Regards -- Robert Sander

[ceph-users] Re: Ceph cluster out of balance after adding OSDs

2023-03-27 Thread Robert Sander
multiple device classes. Regards -- Robert Sander

[ceph-users] Re: Upgrade 16.2.11 -> 17.2.0 failed

2023-03-16 Thread Robert Sander
On 14.03.23 15:22, b...@nocloud.ch wrote: ah.. ok, it was not clear to me that skipping minor version when doing a major upgrade was supported. You can even skip one major version when doing an upgrade. Regards -- Robert Sander

[ceph-users] Re: Upgrade 16.2.11 -> 17.2.0 failed

2023-03-14 Thread Robert Sander
On 14.03.23 14:21, bbk wrote: # ceph orch upgrade start --ceph-version 17.2.0 I would never recommend updating to a .0 release. Why not go directly to the latest 17.2.5? Regards -- Robert Sander

[ceph-users] Re: Can't install cephadm on HPC

2023-03-13 Thread Robert Sander
these CLI tools are available. Regards -- Robert Sander

[ceph-users] Re: Error deploying Ceph Qunicy using ceph-ansible 7 on Rocky 9

2023-03-08 Thread Robert Sander
as this is the recommended installation method. Regards -- Robert Sander

[ceph-users] Re: unable to calc client keyring client.admin placement PlacementSpec(label='_admin'): Cannot place : No matching hosts for label _admin

2023-03-03 Thread Robert Sander
just label one of the cluster hosts with _admin: ceph orch host label add hostname _admin https://docs.ceph.com/en/quincy/cephadm/host-management/#special-host-labels https://docs.ceph.com/en/quincy/cephadm/operations/#client-keyrings-and-configs Regards -- Robert Sander

[ceph-users] Re: Theory about min_size and its implications

2023-03-03 Thread Robert Sander
s a really bad idea outside of a disaster scenario where the other two copies are completely lost to a fire. Regards -- Robert Sander

[ceph-users] Re: s3 compatible interface

2023-02-28 Thread Robert Sander
On 28.02.23 16:31, Marc wrote: Anyone know of a s3 compatible interface that I can just run, and reads/writes files from a local file system and not from object storage? Have a look at Minio: https://min.io/product/overview#architecture Regards -- Robert Sander
