[ceph-users] Re: Cephfs - MDS all up:standby, not becoming up:active

2021-09-17 Thread Robert Sander
I just run "ceph orch upgrade start". Why does the orchestrator not run the necessary steps?
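For reference, a typical invocation on a cephadm-managed cluster looks like this (the target version is only an example):

    # start the upgrade to a specific release and watch its progress
    ceph orch upgrade start --ceph-version 16.2.1
    ceph orch upgrade status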

[ceph-users] Re: Cephfs - MDS all up:standby, not becoming up:active

2021-09-17 Thread Robert Sander
Hi, I had to run "ceph fs set cephfs max_mds 1" and "ceph fs set cephfs allow_standby_replay false", then stop all MDS and NFS containers and start them one after the other again to clear this issue.
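A sketch of that recovery sequence, assuming the filesystem is named cephfs (the status check is added here for illustration):

    # reduce to a single active MDS and disable standby-replay
    ceph fs set cephfs max_mds 1
    ceph fs set cephfs allow_standby_replay false
    # after restarting the MDS daemons one by one, verify one MDS is up:active
    ceph fs status cephfs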

[ceph-users] Re: Cephfs - MDS all up:standby, not becoming up:active

2021-09-17 Thread Robert Sander

[ceph-users] Re: Ignore Ethernet interface

2021-09-14 Thread Robert Sander
…But then it will not work, as the same IP subnet cannot span multiple broadcast domains.

[ceph-users] Re: Ignore Ethernet interface

2021-09-13 Thread Robert Sander
The Linux kernel will happily answer ARP requests on any interface for the IPs it has configured anywhere. That means you have constant ARP flapping in your network. Make the three interfaces bonded and configure all three IPs on the bonded interface.
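A minimal iproute2 sketch of that layout; interface names, addresses and the LACP bond mode are assumptions (802.3ad needs matching switch configuration, active-backup does not):

    # create the bond and enslave the three NICs (links must be down to enslave)
    ip link add bond0 type bond mode 802.3ad
    for nic in eno1 eno2 eno3; do
        ip link set "$nic" down
        ip link set "$nic" master bond0
    done
    ip link set bond0 up
    # configure all three IPs on the single bonded interface
    ip addr add 192.0.2.11/24 dev bond0
    ip addr add 192.0.2.12/24 dev bond0
    ip addr add 192.0.2.13/24 dev bond0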

[ceph-users] Re: SSDs/HDDs in ceph Octopus

2021-09-10 Thread Robert Sander
…A pool should have a uniform class of storage.

[ceph-users] Re: Performance optimization

2021-09-07 Thread Robert Sander
It would be faster to write it to just one SSD instead of writing it to the disk directly. Usually one SSD carries the WAL and RocksDB of four to five HDD-OSDs.

[ceph-users] Re: Performance optimization

2021-09-06 Thread Robert Sander
…show the data distribution among the OSDs. Are all of these HDDs? Are these HDDs equipped with RocksDB on SSD? HDD-only OSDs will have abysmal performance.

[ceph-users] Re: Performance optimization

2021-09-06 Thread Robert Sander
With block devices of the same size distribution in each node you will get an even data distribution. If you have a node with 4x 3TB drives and one with 4x 6TB drives, Ceph cannot use the 6TB drives efficiently.

[ceph-users] Re: A simple erasure-coding question about redundance

2021-08-27 Thread Robert Sander

[ceph-users] Re: How to safely turn off a ceph cluster

2021-08-11 Thread Robert Sander
Run "ceph osd set noout" before shutting down, and after the cluster has been booted again and every OSD has joined: "ceph osd unset noout".
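As a sketch, the full maintenance cycle then looks like this:

    # before powering the cluster off: do not mark stopped OSDs out
    ceph osd set noout
    # ...shut the nodes down, do the maintenance, boot everything again...
    # once all OSDs have rejoined (check with "ceph osd stat"), re-enable it
    ceph osd unset noout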

[ceph-users] Re: Ceph Pacific mon is not starting after host reboot

2021-08-10 Thread Robert Sander
…daemons (outside of OSDs, I believe) from offline hosts. Sorry for maybe being rude, but how on earth does one come up with the idea to automatically remove components from a cluster where just one node is currently rebooting, without any operator intervention?

[ceph-users] Re: Size of cluster

2021-08-09 Thread Robert Sander
If you have 3 nodes with 5x 12TB each (60TB) and 2 nodes with 4x 18TB each (72TB), the maximum usable capacity will not be the sum of all disks. Remember that Ceph tries to evenly distribute the data.

[ceph-users] RocksDB resharding does not work

2021-07-08 Thread Robert Sander
…8 17:13:46 cephtest24 bash[4161252]: debug 2021-07-08T15:13:46.825+ 7efc32db4080 -1 ** ERROR: osd init failed: (5) Input/output error
How do I correct the issue?

[ceph-users] Re: Why you might want packages not containers for Ceph deployments

2021-06-26 Thread Robert Sander
Package building and hosting for open source projects is solved with the openSUSE Build Service: https://build.opensuse.org/ But I think what Sage meant was e.g. different versions of GCC on the distributions and not being able to use all the latest features needed for compiling Ceph.

[ceph-users] Re: pacific installation at ubuntu 20.04

2021-06-24 Thread Robert Sander
An "apt update" is missing between these two steps. The first creates /etc/apt/sources.list.d/ceph.list and the second installs packages, but the repo list was never updated.

[ceph-users] Re: HDD <-> OSDs

2021-06-22 Thread Robert Sander
You could theoretically RAID0 multiple disks and then put an OSD on top of that, but this would create very large OSDs, which are not good for recovering data. Recovering such a "beast" would just take too long.

[ceph-users] Re: Failover with 2 nodes

2021-06-15 Thread Robert Sander
On 15.06.21 15:16, nORKy wrote:
> Why is there no failover ??
Because a single MON out of two is not a majority and cannot form a quorum.

[ceph-users] Re: orch upgrade mgr starts too slow and is terminated?

2021-05-07 Thread Robert Sander
…I had success with stopping the "looping" mgr container via "systemctl stop" on the node. Cephadm then switches to another MGR to continue the upgrade. After that I just started the stopped mgr container again and the upgrade continued.

[ceph-users] Re: orch upgrade mgr starts too slow and is terminated?

2021-05-06 Thread Robert Sander
On 06.05.21 at 17:18, Sage Weil wrote:
> I hit the same issue. This was a bug in 16.2.0 that wasn't completely
> fixed, but I think we have it this time. Kicking off a 16.2.3 build
> now to resolve the problem.
Great. I also hit that today. Thanks for fixing it quickly.

[ceph-users] Re: Ceph cluster not recover after OSD down

2021-05-05 Thread Robert Sander
…will lead to data loss or at least temporary unavailability. The situation now is that all copies (or EC chunks, respectively) of a PG are stored on OSDs of the same host. These PGs will be unavailable if that host is down.

[ceph-users] Re: Ceph cluster not recover after OSD down

2021-05-05 Thread Robert Sander
> …the MDS suffers when only 4% of the OSDs go down (in the same node). Do I need to modify the crush map?
With an unmodified crush map and the default placement rule this should not happen. Can you please show the output of "ceph osd crush rule dump"?

[ceph-users] Re: Ceph cluster not recover after OSD down

2021-05-05 Thread Robert Sander
…crush map. It looks like the OSD is the failure domain, and not the host. If it were the host, the failure of any number of OSDs in a single host would not bring PGs down. For the default redundancy rule and pool size 3 you need three separate hosts.
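To verify this, comparing the CRUSH tree with the rule usually suffices; a sketch, assuming the default rule name:

    # show how hosts and OSDs are arranged in the CRUSH hierarchy
    ceph osd tree
    # inspect the rule; the chooseleaf step should say "type": "host"
    ceph osd crush rule dump replicated_rule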

[ceph-users] Download-Mirror eu.ceph.com misses Debian Release file

2021-04-22 Thread Robert Sander
Hi, to whom it may concern: the mirror server eu.ceph.com does not carry the Release files for 15.2.11 in https://eu.ceph.com/debian-15.2.11/dists/*/ and 16.2.1 in https://eu.ceph.com/debian-16.2.1/dists/*/

[ceph-users] Re: After upgrade to 15.2.11 no access to cluster any more

2021-04-22 Thread Robert Sander
On 22.04.21 at 09:07, Robert Sander wrote:
> What should I do?
I should also upgrade the CLI client, which was still at 15.2.8 (Ubuntu 20.04), because a "ceph orch upgrade" run only updates the software inside the containers.

[ceph-users] After upgrade to 15.2.11 no access to cluster any more

2021-04-22 Thread Robert Sander
…denied (error connecting to the cluster). What should I do?

[ceph-users] Re: ceph orch upgrade fails when pulling container image

2021-04-21 Thread Robert Sander
Hi,
On 21.04.21 at 10:14, Robert Sander wrote:
> How do I update a Ceph cluster in this situation?
I learned that I need to create an account on hub.docker.com to be able to download Ceph container images in the future. With the credentials I need to run "docker login".
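A sketch of that workaround (the account name is hypothetical; the registry-login step only exists on releases whose cephadm supports it):

    # authenticate against Docker Hub to lift the anonymous pull rate limit
    docker login -u mydockeruser
    # let cephadm store the credentials so all hosts can pull images
    ceph cephadm registry-login docker.io mydockeruser 'secret'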

[ceph-users] ceph orch upgrade fails when pulling container image

2021-04-21 Thread Robert Sander
Hi,
# docker pull ceph/ceph:v16.2.1
Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
How do I update a Ceph cluster in this situation?

[ceph-users] Re: cephadm custom mgr modules

2021-04-12 Thread Robert Sander
Hi, this is one of the use cases mentioned in Tim Serong's talk: https://youtu.be/pPZsN_urpqw Containers are great for deploying a fixed state of a software project (a release), but not so much for the development of plugins etc.

[ceph-users] Re: RGW failed to start after upgrade to pacific

2021-04-12 Thread Robert Sander
…should not upgrade to Pacific currently.

[ceph-users] Re: Problem using advanced OSD layout in octopus

2021-04-06 Thread Robert Sander
Hi, the DB device needs to be empty for an automatic OSD service. The service will then create N DB slots using logical volumes, not partitions.
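A sketch of such a service specification (names and device filters are examples, not from the thread):

    # osd_spec.yml: rotational devices become OSDs,
    # DB slots are carved as LVs out of the empty non-rotational devices
    service_type: osd
    service_id: hdd-osds-with-ssd-db
    placement:
      host_pattern: '*'
    data_devices:
      rotational: 1
    db_devices:
      rotational: 0

It would be applied with "ceph orch apply osd -i osd_spec.yml".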

[ceph-users] Re: Pacific unable to configure NFS-Ganesha

2021-04-05 Thread Robert Sander
Hi, I forgot to mention that CephFS is enabled and working.

[ceph-users] Re: RGW failed to start after upgrade to pacific

2021-04-05 Thread Robert Sander
> bash[9823]: debug 2021-04-04T13:01:04.995+ 7ff80f172440 0 ERROR: failed to start datalog_rados service ((5) Input/output error)
> bash[9823]: debug 2021-04-04T13:01:04.995+ 7ff80f172440 0 ERROR: failed to init services (ret=(5) Input/output error)
I see the same issues on a…

[ceph-users] Pacific unable to configure NFS-Ganesha

2021-04-05 Thread Robert Sander
…unexpected condition which prevented it from fulfilling the request.", "request_id": "e89b8519-352f-4e44-a364-6e6faf9dc533"}'] I have no radosgateway…

[ceph-users] Re: Is metadata on SSD or bluestore cache better?

2021-04-05 Thread Robert Sander
…DB volumes and one OSD on each SSD. HDD-only OSDs are quite slow. If you do not have enough SSDs for them, go with an SSD-only CephFS metadata pool.
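A sketch of pinning the metadata pool to SSDs via a device-class rule (rule and pool names are examples):

    # create a replicated rule that only selects OSDs with device class ssd
    ceph osd crush rule create-replicated ssd-only default host ssd
    # let the CephFS metadata pool use it
    ceph osd pool set cephfs_metadata crush_rule ssd-only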

[ceph-users] OpenSSL security update for Octopus container?

2021-03-26 Thread Robert Sander
…check docker.io/ceph/ceph:v15" but it tells me that the containers do not need to be upgraded. How will this security fix of OpenSSL be deployed in a timely manner to users of the Ceph container images?

[ceph-users] Re: lvm fix for reseated reseated device

2021-03-15 Thread Robert Sander
> …already rebooted the box so I won't be able to test immediately.)
My experience with LVM is that only a reboot helps in this situation.

[ceph-users] Re: Ceph server

2021-03-12 Thread Robert Sander
On 10.03.21 at 20:44, Ignazio Cassano wrote:
> 1 small ssd is for operating system and 1 is for mon.
Make that a RAID1 set of SSDs and be happier. ;)

[ceph-users] Re: Ceph server

2021-03-12 Thread Robert Sander
…10G bonded interfaces in the cluster network? I would assume that you would want to go at least 2x 25G here.

[ceph-users] Re: firewall config for ceph fs client

2021-02-10 Thread Robert Sander
On 10.02.21 at 15:54, Frank Schilder wrote:
> Which ports are the clients using - if any?
All clients only have outgoing connections and do not listen on any ports themselves. The Ceph cluster will not initiate a connection to the client.

[ceph-users] Re: firewall config for ceph fs client

2021-02-10 Thread Robert Sander
…the cluster. You need ports 3300 and 6789 for the MONs on their IPs, and any dynamic port starting at 6800 used by the OSDs. The MDS also uses a port above 6800.
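On the cluster side this translates roughly into the following rules (assuming firewalld; 6800-7300 is the default range for OSD and MDS daemons):

    # MON ports (msgr2 and legacy), plus the dynamic daemon port range
    firewall-cmd --permanent --add-port=3300/tcp --add-port=6789/tcp
    firewall-cmd --permanent --add-port=6800-7300/tcp
    firewall-cmd --reload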

[ceph-users] Re: Worst thing that can happen if I have size= 2

2021-02-04 Thread Robert Sander
Hi,
On 04.02.21 at 12:10, Frank Schilder wrote:
> Going to 2+2 EC will not really help
On such a small cluster you cannot even use EC, because there are not enough independent hosts. As a rule of thumb there should be k+m+1 hosts in a cluster, AFAIK.

[ceph-users] Re: Unable to use ceph command

2021-01-29 Thread Robert Sander
…(error connecting to the cluster): this issue is mostly caused by a ceph.conf or ceph.client.admin.keyring file in /etc/ceph that is not readable for the user that starts the ceph command.
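A quick sketch for checking this on the client:

    # the user running "ceph" must be able to read both files
    ls -l /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
    # if only root may read the keyring, this works while the plain command fails
    sudo ceph -s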

[ceph-users] Re: Large rbd

2021-01-21 Thread Robert Sander
> …together using lvm or somesuch? What are the tradeoffs?
IMHO there are no tradeoffs; there could even be benefits to creating a volume group with multiple physical volumes on RBD, as the requests can be better parallelized (e.g. with the virtio-single SCSI controller for qemu).

[ceph-users] Python API mon_comand()

2021-01-15 Thread Robert Sander
stored":27410520278,"objects":6781,"kb_used":80382849,"bytes_used":82312036566,"percent_used":0.1416085809469223,"max_avail":166317473792}},{"name":"cephfs_data","id":3,"stats":{"stored":1282414464

[ceph-users] Re: bluefs_buffered_io=false performance regression

2021-01-11 Thread Robert Sander
Hi Marc and Dan, thanks for your quick responses assuring me that we did nothing totally wrong.

[ceph-users] bluefs_buffered_io=false performance regression

2021-01-11 Thread Robert Sander
(truncated benchmark result table for 4194304-byte random I/O)

[ceph-users] Re: Clearing contents of OSDs without removing them?

2020-12-19 Thread Robert Sander
…Deleting the pools also removes the objects and you can start anew.

[ceph-users] Re: Ceph on ARM ?

2020-11-24 Thread Robert Sander
…com.tw/

[ceph-users] Re: (Ceph Octopus) Repairing a neglected Ceph cluster - Degraded Data Reduncancy, all PGs degraded, undersized, not scrubbed in time

2020-11-17 Thread Robert Sander
> k=2, m=2
You need k+m=4 independent hosts for the EC parts, but your CRUSH map only shows two hosts. This is why all your PGs are undersized and degraded.
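For reference, such a profile is created like this (the profile name is an example; crush-failure-domain=host is what requires the four independent hosts):

    # EC 2+2 profile with "host" as the failure domain
    ceph osd erasure-code-profile set ec22 k=2 m=2 crush-failure-domain=host
    ceph osd erasure-code-profile get ec22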

[ceph-users] Re: (Ceph Octopus) Repairing a neglected Ceph cluster - Degraded Data Reduncancy, all PGs degraded, undersized, not scrubbed in time

2020-11-16 Thread Robert Sander
On 11.11.20 at 13:05, Hans van den Bogert wrote:
> And also the erasure coded profile, so an example on my cluster would be:
>
> k=2
> m=1
With this profile you can only lose one OSD at a time, which is really not that redundant.

[ceph-users] Re: (Ceph Octopus) Repairing a neglected Ceph cluster - Degraded Data Reduncancy, all PGs degraded, undersized, not scrubbed in time

2020-11-16 Thread Robert Sander
…number of nodes (more than 10) and a proportional number of OSDs. Mixing HDDs and SSDs in one pool is not good practice, as a pool should have OSDs of the same speed.

[ceph-users] Re: (Ceph Octopus) Repairing a neglected Ceph cluster - Degraded Data Reduncancy, all PGs degraded, undersized, not scrubbed in time

2020-11-11 Thread Robert Sander
…at least 7 to 10 nodes and a corresponding number of OSDs. This cluster is too small to do any amount of "real" work.

[ceph-users] Re: (Ceph Octopus) Repairing a neglected Ceph cluster - Degraded Data Reduncancy, all PGs degraded, undersized, not scrubbed in time

2020-11-11 Thread Robert Sander
> …the ops and repair tasks for the first time here.
My condolences. Get the data off that cluster and take the cluster down. In the current setup it will never work.

[ceph-users] Re: Does it make sense to have separate HDD based DB/WAL partition

2020-11-03 Thread Robert Sander
> …partition?
If you do not have faster devices for DB/WAL there is no need to create them. It does not make the OSD faster.

[ceph-users] Re: Ubuntu 20 with octopus

2020-10-12 Thread Robert Sander
…installed on one node, i.e. the distribution must support Docker or Podman. cephadm sets up a containerized Ceph cluster with containers based on CentOS.

[ceph-users] Re: CephFS user mapping

2020-10-06 Thread Robert Sander
…to map that onto user name and group name. What you use for consistent mappings between your CephFS clients is up to you. It could be NIS, libnss-ldap, winbind (Active Directory) or any other method that keeps the passwd and group files in sync.

[ceph-users] Re: Ceph as a distributed filesystem and kerberos integration

2020-10-02 Thread Robert Sander
…by User ID locally. The recommended way is to run a Samba cluster using CephFS as a backend. Your users would then authenticate against Samba, which would need to speak to your LDAP/Kerberos.

[ceph-users] Re: Orchestrator cephadm not setting CRUSH weight on OSD

2020-09-29 Thread Robert Sander
…the OSD and applies it again after deploying an OSD with the same ID. I do not know why the orchestrator does it, but there seems to be a fix scheduled for 15.2.5.

[ceph-users] Re: Mount CEPH-FS on multiple hosts with concurrent access to the same data objects?

2020-09-22 Thread Robert Sander
…? Do you know that Proxmox is able to store VM images as RBDs directly in a Ceph cluster? I would not recommend storing VM images as files on CephFS, or even exporting NFS out of a VM to store other VM images on it.

[ceph-users] Re: What is the advice, one disk per OSD, or multiple disks

2020-09-21 Thread Robert Sander
…An exception is very fast devices like NVMe, where one OSD is not able to fully use the available IO bandwidth; NVMes can have two OSDs per device. But you would not create one OSD over multiple devices.

[ceph-users] Re: Spanning OSDs over two drives

2020-09-18 Thread Robert Sander
…each with one OSD on it: double the space and the same risk. If you have at least "host" as the failure domain, then you have no copies of the same object in one single host. That means it does not matter if you take two OSDs offline at the same time.

[ceph-users] Orchestrator & ceph osd purge

2020-09-14 Thread Robert Sander
Hi, is it correct that when using the orchestrator to deploy and manage a cluster you should not use "ceph osd purge" anymore, as the orchestrator is then not able to find the OSD for the "ceph orch osd rm" operation?
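For reference, the orchestrator-native removal path looks like this (the OSD id is an example):

    # drain and remove the OSD through the orchestrator, then watch progress
    ceph orch osd rm 1
    ceph orch osd rm status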

[ceph-users] Orchestrator cephadm not setting CRUSH weight on OSD

2020-09-10 Thread Robert Sander
…0.09799 osd.7 up 1.0 1.0
Why does osd.1 have a weight of 0 now? When the OSDs were initially deployed with the first "ceph orch apply" command, the weights were set correctly according to their size. Why is there a difference between this process and an OSD…
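Until that is fixed, the weight can be restored by hand; a sketch (the CRUSH weight roughly equals the device size in TiB, so 0.09799 here corresponds to a ~100 GB device):

    # give the redeployed OSD its size-based CRUSH weight back
    ceph osd crush reweight osd.1 0.09799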

[ceph-users] Re: cephadm & iSCSI

2020-09-04 Thread Robert Sander
Hi, I am using 15.2.4 as .5 has not been released.

[ceph-users] cephadm & iSCSI

2020-09-04 Thread Robert Sander
…"Could not load module: %s" % module)
rtslib_fb.utils.RTSLibError: Could not load module: iscsi_target_mod
Solution: run "modprobe iscsi_target_mod" on the host itself; the container is not allowed to do that.

[ceph-users] Re: cephadm grafana url

2020-09-03 Thread Robert Sander
…"ceph versions" reports 15.2.4. https://tracker.ceph.com/issues/44877 is exactly the issue I experience. It seems that the fix is in 15.2.5; any chance that this will be released before the weekend? ;)

[ceph-users] Re: cephadm grafana url

2020-09-03 Thread Robert Sander
Hi,
On 02.09.20 at 23:17, Dimitri Savineau wrote:
> Did you try to restart the dashboard mgr module after your change?
>
> # ceph mgr module disable dashboard
> # ceph mgr module enable dashboard
Yes, I should have mentioned that. No effect, though.

[ceph-users] cephadm grafana url

2020-09-02 Thread Robert Sander
…://ceph01:3000
root@ceph01:~# ceph dashboard get-grafana-api-url
https://ceph01:3000

[ceph-users] RBD pool damaged, repair options?

2020-08-13 Thread Robert Sander
rbd: error opening vm-501-disk-2: (5) Input/output error
What are the options to repair an RBD pool so that at least the RBDs (most data objects are still there) become available again?

[ceph-users] Re: Nautilus cluster damaged + crashing OSDs

2020-04-21 Thread Robert Sander
…restart the OSD (which resulted in a crash), but the script is still not able to find the info.

[ceph-users] Re: Nautilus cluster damaged + crashing OSDs

2020-04-21 Thread Robert Sander
> …state, as several PGs are down and/or stale.
Thanks for your input so far. It looks like this issue: https://tracker.ceph.com/issues/36337. We will try to use the linked Python script to repair the OSD. "ceph-bluestore-tool repair" did not find anything.

[ceph-users] Nautilus cluster damaged + crashing OSDs

2020-04-20 Thread Robert Sander
(truncated dump of the OSD's debug log-level settings: tracker 0/0, objclass 0/0, filestore 0/0, journal 0/0, ms 0/0, mon 0/0, monc 0/0, paxos 0/0, tp 0/0, auth 0/0, crypto 0/0, finisher 0/0, reserver 0/0, heartbeatmap 0/0, perfcounter 0/0, rgw 1/5, rgw_sync 0/0, civetweb 0/0, javaclient 0/0, …)

[ceph-users] Re: remove S3 bucket with rados CLI

2020-04-09 Thread Robert Sander
…being ordered to expand this proof-of-concept setup for backup storage.

[ceph-users] remove S3 bucket with rados CLI

2020-04-09 Thread Robert Sander
…but no write. "s3cmd rb …" says operation halted, and "radosgw-admin bucket rm" also waits for a healthy cluster. The other option would be to tune the nearfull_ratio and full_ratio temporarily to allow the cluster to use more space, correct?
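A sketch of temporarily raising the ratios (the values are examples; lower them again once the deletes have freed enough space):

    # allow a bit more headroom so the delete operations can be processed
    ceph osd set-nearfull-ratio 0.90
    ceph osd set-full-ratio 0.97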

[ceph-users] Re: Resize Bluestore i.e. shrink?

2020-04-04 Thread Robert Sander
…will not work. Ceph does not like swapping.
> My question is: Can I shrink a bluestore lvm?
No, you need to recreate the OSD.

[ceph-users] Re: Netplan bonding configuration

2020-04-01 Thread Robert Sander
…You cannot create a bonding device from VLAN interfaces; you have to do it the other way around and create VLAN interfaces on top of your bond. Why not bond in LACP mode and use both interfaces in parallel?

[ceph-users] Re: Ceph pool quotas

2020-03-23 Thread Robert Sander
> …this pool on OSDs. So this is the threshold for bytes_used.
Could the documentation be changed to express this more clearly, please?

[ceph-users] Re: default data pools for cephfs: replicated vs. ec

2020-02-26 Thread Robert Sander
…new CephFS. There is no in-place migration possible. You also cannot convert a pool from EC to replicated or vice versa.

[ceph-users] next Ceph Meetup Berlin, Germany

2020-02-26 Thread Robert Sander
…it, please also do not hesitate to contact me.

[ceph-users] S3 Object Lock feature in 14.2.5

2020-01-09 Thread Robert Sander
…Am I missing something here?

[ceph-users] Re: Mimic 13.2.8 deep scrub error: "size 333447168 > 134217728 is too large"

2020-01-03 Thread Robert Sander
> …this cluster, it's only now showing up as a warning.
Thanks Paul, I already thought that this would be the case. We will recommend CephFS to our customer for storing ISOs. I really do not know why they are storing them as single RADOS objects.

[ceph-users] Mimic 13.2.8 deep scrub error: "size 333447168 > 134217728 is too large"

2020-01-02 Thread Robert Sander
…objects. All OSDs are BlueStore. What is happening here?

[ceph-users] Re: FUSE X kernel mounts

2019-11-25 Thread Robert Sander
…development process, but it has better performance. If you have a recent kernel, you can use the kernel mount. If you have an enterprise distribution with an older kernel, use FUSE for current features.

[ceph-users] Re: dashboard not working

2019-09-17 Thread Robert Sander
…cannot start without a certificate.

[ceph-users] Re: Unable to replace OSDs deployed with ceph-volume lvm batch

2019-09-09 Thread Robert Sander
…command line argument; otherwise ceph-volume thinks it is a "real" device.
ceph-volume lvm create --bluestore --data /dev/sda --block.db ceph-block-dbs-ea684aa8-544e-4c4a-8664-6cb50b3116b8/osd-block-db-a8f1489a-d97b-479e-b9a7-30fc9fa99cb5
should work.

[ceph-users] ceph-iscsi and tcmu-runner RPMs for CentOS?

2019-09-07 Thread Robert Sander
…neither tcmu-runner nor ceph-iscsi. Where do I get these RPMs from?
