[ceph-users] Re: quay.io image no longer existing, required for node add to repair cluster

2022-02-25 Thread Kai Börnert
you can use upgrade to make sure every daemon is on the same image. - Adam King On Fri, Feb 25, 2022 at 10:06 AM Kai Börnert wrote: Hi, what would be the correct way to move forward? I have a 3-node cephadm-installed cluster, one node died, the other two are fine and
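
A hedged sketch of what "use upgrade" looks like in cephadm terms; the image tag below is a placeholder, not one taken from the thread:

# point the whole cluster at one image that still exists and let cephadm converge all daemons onto it
ceph orch upgrade start --image quay.io/ceph/ceph:v16.2.7
ceph orch upgrade status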

[ceph-users] quay.io image no longer existing, required for node add to repair cluster

2022-02-25 Thread Kai Börnert
Hi, what would be the correct way to move forward? I have a 3-node cephadm-installed cluster; one node died, the other two are fine and work as expected, so no data loss, but a lot of remapped/degraded PGs. The dead node was replaced and I wanted to add it to the cluster using "ceph orch host
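
For context, a minimal sketch of re-adding a rebuilt host with cephadm; the hostname node3 and the root login are assumptions for illustration, not details from the thread:

# give cephadm SSH access to the rebuilt host, then add it back to the cluster
ssh-copy-id -f -i /etc/ceph/ceph.pub root@node3
ceph orch host add node3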

[ceph-users] Re: Direct disk/Ceph performance

2022-01-16 Thread Kai Börnert
Hi, to have a fair test you need to replicate the power-loss scenarios Ceph does cover and you currently are not: no memory caches in the OS or on the disk are allowed to be used; Ceph has to ensure that an object written is actually written, even if a node of your cluster explodes right at
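
A sketch of such a test with fio, forcing every write to be flushed to stable storage rather than acknowledged from a cache; the device path is a placeholder and the run overwrites it:

# 4k random writes, one outstanding IO, fsync after every write
fio --name=synctest --filename=/dev/sdX --rw=randwrite --bs=4k \
    --ioengine=libaio --direct=1 --fsync=1 --iodepth=1 \
    --runtime=60 --time_based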

[ceph-users] Re: Dashboard's website hangs during loading, no errors

2021-11-18 Thread Kai Börnert
Hi, do you use cephadm and have more nodes than deployed mgrs? If so, it might be that the node you are connecting to no longer has an instance of the mgr running, and you are only getting some leftovers from the browser cache? At least this was happening in my test cluster, but I was always able to
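
A few commands that help confirm which node actually runs the active mgr (and therefore serves the dashboard); a sketch, not taken from the thread:

ceph mgr stat                    # name of the active mgr
ceph mgr services                # URL the dashboard is currently served from
ceph orch ps --daemon-type mgr   # where cephadm has placed the mgr daemons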

[ceph-users] Re: Performance optimization

2021-09-06 Thread Kai Börnert
Hi, are any of those old disks SMR ones? Because they will absolutely destroy any kind of performance (Ceph does not use write caches due to power-loss concerns, so they kind of do their whole magic for each write request). Greetings On 9/6/21 10:47 AM, Simon Sutter wrote: Hello everyone! I
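
A rough way to check whether a drive is SMR; only host-managed/host-aware drives announce it, drive-managed SMR usually has to be identified from the model number or datasheet:

cat /sys/block/sdX/queue/zoned   # "host-managed"/"host-aware" vs "none"
smartctl -i /dev/sdX             # model number, to cross-check against the vendor's SMR lists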

[ceph-users] Re: SATA vs SAS

2021-08-22 Thread Kai Börnert
As far as I understand, the more important factor (for the SSDs) is whether they have power-loss protection (so they can use their on-device write cache) and how many IOPS they have when doing direct writes with queue depth 1. I just did a test on an HDD cluster with block.db on SSD, using extra cheap
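
The queue-depth-1 direct-write test meant here could look roughly like the following fio invocation (destructive; the device path is a placeholder):

fio --name=qd1 --filename=/dev/nvme0n1 --rw=write --bs=4k \
    --ioengine=libaio --direct=1 --sync=1 --iodepth=1 --numjobs=1 \
    --runtime=60 --time_based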

[ceph-users] Re: Can we deprecate FileStore in Quincy?

2021-06-28 Thread Kai Börnert
If you want to go cheap and somewhat questionable, there are some ASRock mainboards with a soldered-in Atom CPU that support up to 32 GB of memory (officially only 8, but the controller does more) and have 2 SATA ports directly plus a free 16x PCIe port. Those boards are usually less than 90€, not as

[ceph-users] Re: Why you might want packages not containers for Ceph deployments

2021-06-21 Thread Kai Börnert
I think the primary goal of container environments is resource isolation. At least when I read about the history, I never read anything about a tool for people to skip learning something. Containers allow using mixed versions of the same dependency, despite it being a shared dependency, doing

[ceph-users] Re: Why you might want packages not containers for Ceph deployments

2021-06-20 Thread Kai Börnert
Because all of this reads way too negative regarding containers to me, I wanted to give a different perspective. Coming from a day-to-day job that heavily utilizes Kubernetes for its normal environment, I found cephadm quite a godsend, instead of having to deal with a lot of pesky

[ceph-users] Re: orch upgrade mgr starts too slow and is terminated?

2021-05-07 Thread Kai Börnert
10:42, Kai Börnert wrote: Hi, thanks for your explanation. I see the 16.2.3 build is published, but (of course) I cannot update to it, as the version I currently use still has this bug. Is there some workaround/hack I can use to upgrade? I had success with stopping the "looping

[ceph-users] Re: orch upgrade mgr starts too slow and is terminated?

2021-05-07 Thread Kai Börnert
missed one case.) sage On Thu, May 6, 2021 at 9:59 AM Kai Börnert wrote: Hi all, upon updating to 16.2.2 via cephadm the upgrade gets stuck on the first mgr. Looking into this via docker logs, I see that it is still loading modules when it is apparently terminated and restarted in a loop

[ceph-users] orch upgrade mgr starts too slow and is terminated?

2021-05-06 Thread Kai Börnert
Hi all, upon updating to 16.2.2 via cephadm the upgrade gets stuck on the first mgr. Looking into this via docker logs, I see that it is still loading modules when it is apparently terminated and restarted in a loop. When pausing the update, the mgr succeeds in starting with the new
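
A sketch of the commands involved in inspecting and pausing such a stuck upgrade; the mgr container name is host-specific and left as a placeholder:

ceph orch upgrade status
ceph orch upgrade pause            # stop cephadm from restarting the mgr while it is still loading modules
docker logs -f <mgr-container>     # follow the new mgr's startup
ceph orch upgrade resume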

[ceph-users] Monitor disappears/stopped after testing monitor-host loss and recovery

2021-04-14 Thread Kai Börnert
Hi, I'm currently testing some disaster scenarios. When removing one osd/monitor host, I see that a new quorum is built without the missing host. The missing host is listed in the dashboard under Not In Quorum, so probably everything is as expected. After restarting the host, I see that the
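
Hedged CLI checks for this kind of scenario (not taken from the thread):

ceph quorum_status -f json-pretty   # monitors currently in quorum
ceph mon stat                       # monmap epoch and quorum summary
ceph orch ps --daemon-type mon      # whether cephadm still runs a mon on the restarted host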

[ceph-users] Is metadata on SSD or bluestore cache better?

2021-04-04 Thread Kai Börnert
Hi, I hope this mailing list is OK for this kind of question; if not, please ignore. I'm currently in the process of planning a smaller Ceph cluster, mostly for CephFS use. The budget still allows for some SSDs in addition to the required hard disks. I see two options on how to use those,
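
If "metadata on SSD" means putting the BlueStore DB (block.db) of the HDD OSDs on the SSDs, one hedged way to express that is a cephadm OSD spec like the sketch below; the service_id and file name are made up for illustration:

cat <<EOF > osd_spec.yaml
service_type: osd
service_id: hdd_with_ssd_db
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
EOF
ceph orch apply -i osd_spec.yaml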