[ceph-users] Re: Inherited CEPH nightmare

2022-10-12 Thread Anthony D'Atri
I agree with Janne’s thoughts here, especially since you’re on SSDs.

> On Oct 12, 2022, at 03:38, Janne Johansson wrote:
>
>> I've changed some elements of the config now and the results are much better
>> but still quite poor relative to what I would consider normal SSD
>> performance.
>

[ceph-users] Re: Inherited CEPH nightmare

2022-10-12 Thread Janne Johansson
> I've changed some elements of the config now and the results are much better
> but still quite poor relative to what I would consider normal SSD performance. The number of PGs has been increased from 128 to 256. Not yet run JJ Balancer.
> In terms of performance, I measured the time it takes
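
For readers wanting the concrete commands behind that change, a pg_num increase is normally done as sketched below; the pool name "vm-pool" is only a placeholder, and the built-in balancer (a different tool from the external JJ balancer script) can be inspected the same way:

  # What the PG autoscaler currently recommends per pool
  ceph osd pool autoscale-status

  # Raise pg_num on the (example) RBD pool; pgp_num follows automatically
  # on recent releases
  ceph osd pool set vm-pool pg_num 256

  # State of the built-in balancer module
  ceph balancer status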

[ceph-users] Re: Inherited CEPH nightmare

2022-10-11 Thread Tino Todino
Hi Janne,

I've changed some elements of the config now and the results are much better but still quite poor relative to what I would consider normal SSD performance. The osd_memory_target is now set to 12GB for 3 of the 4 hosts (each of these hosts has 1.5TB RAM so I can allocate loads if
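
For context, a 12 GB osd_memory_target is usually applied through the monitor config database rather than ceph.conf; a minimal sketch (the value is 12 GiB expressed in bytes, and osd.3 is just an example ID):

  # Apply to all OSDs
  ceph config set osd osd_memory_target 12884901888

  # Or to a single OSD, e.g. osd.3
  ceph config set osd.3 osd_memory_target 12884901888

  # Confirm the value an OSD will actually use
  ceph config get osd.3 osd_memory_target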

[ceph-users] Re: Inherited CEPH nightmare

2022-10-10 Thread Janne Johansson
> osd_memory_target = 2147483648
>
> Based on some reading, I'm starting to understand a little about what can be
> tweaked. For example, I think the osd_memory_target looks low. I also think
> the DB/WAL should be on dedicated disks or partitions, but have no idea what
> procedure
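
To illustrate the "DB/WAL on dedicated disks" part: a new OSD with its RocksDB (and, implicitly, its WAL) on a separate faster device is typically created with ceph-volume; the device paths below are placeholders only:

  # Data on one SSD, DB+WAL on a partition of a faster (e.g. NVMe) device
  ceph-volume lvm create --data /dev/sdb --block.db /dev/nvme0n1p1

Existing OSDs can have a DB device added later with ceph-bluestore-tool (bluefs-bdev-new-db), but that is a more involved procedure.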

[ceph-users] Re: Inherited CEPH nightmare

2022-10-07 Thread Josef Johansson
Hi,

You want to also check disk_io_weighted via some kind of metric system. That will detect which SSDs are hogging the system, if there are any specific ones. Also check their error levels and endurance.

On Fri, 7 Oct 2022 at 17:05, Stefan Kooman wrote:
> On 10/7/22 16:56, Tino Todino
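
Where no metrics stack exists yet, a rough manual equivalent of the same check (which drives are busiest, and what their error and wear levels look like) could be the following, with example device names:

  # Per-device utilisation; a drive stuck near 100% in %util is a likely hog
  iostat -x 1

  # Error log plus wear/endurance attributes for a suspect SSD
  smartctl -a /dev/sdb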

[ceph-users] Re: Inherited CEPH nightmare

2022-10-07 Thread Robert Sander
Hi Tino,

On 07.10.22 at 16:56, Tino Todino wrote:
> I know some of these are consumer class, but I'm working on replacing these.

This would be your biggest issue. SSD performance can vary drastically. Ceph needs "multi-use" enterprise SSDs, not read-optimized consumer ones. All 4 hosts are
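
A common way to quantify the difference described here is a single-job 4k synchronous write test with fio: consumer drives without power-loss protection tend to drop to a few hundred IOPS, while "multi-use" enterprise drives sustain far more. This is only a sketch, and writing to a raw device is destructive, so it belongs on a spare disk:

  # 4k sync writes at queue depth 1, roughly how Ceph hits its WAL/journal.
  # WARNING: this overwrites data on the target device.
  fio --name=sync-write-test --filename=/dev/sdX --direct=1 --sync=1 \
      --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 \
      --time_based --group_reporting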

[ceph-users] Re: Inherited CEPH nightmare

2022-10-07 Thread Stefan Kooman
On 10/7/22 16:56, Tino Todino wrote:
> Hi folks,
>
> The company I recently joined has a Proxmox cluster of 4 hosts with a CEPH implementation that was set up using the Proxmox GUI. It is running terribly, and as a CEPH newbie I'm trying to figure out if the configuration is at fault. I'd really
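
For anyone triaging a similar cluster, the usual first-look commands are the ones below; they show overall health, per-OSD utilisation and any settings (including ones written by the Proxmox GUI) that differ from the defaults:

  ceph -s                  # overall health, PG states, client I/O
  ceph osd df tree         # per-OSD size, usage and PG count by host
  ceph osd pool ls detail  # pool replication/EC settings and pg_num
  ceph config dump         # options stored in the monitor config database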