[ceph-users] Re: Free space in ec-pool should I worry?

2021-11-01 Thread Anthony D'Atri
I think this thread has inadvertently conflated the two. Balancer: ceph-mgr module that uses pg-upmap to balance OSD utilization / fullness. Autoscaler: attempts to set pg_num / pgp_num for each pool adaptively. > > The balancer does a pretty good job. It's the PG autoscaler that has bitten
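For readers following along, the two are easy to inspect separately; a minimal sketch using the standard CLI:

    # balancer: data placement across OSDs via pg-upmap
    ceph balancer status
    ceph balancer mode upmap
    # autoscaler: per-pool pg_num / pgp_num targets
    ceph osd pool autoscale-status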

[ceph-users] Re: OSD spend too much time on "waiting for readable" -> slow ops -> laggy pg -> rgw stop -> worst case osd restart

2021-11-01 Thread Sage Weil
Hi Manuel, I'm looking at the ticket for this issue (https://tracker.ceph.com/issues/51463) and tried to reproduce. This was initially trivial to do with vstart (rados bench paused for many seconds after stopping an osd) but it turns out that was because the vstart ceph.conf includes
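For readers without the ticket open, the reproducer being described is roughly of this shape (pool name, duration and the way the OSD is stopped are placeholders, not details taken from the ticket):

    # generate steady client I/O against a test pool
    rados bench -p testpool 60 write --no-cleanup
    # in another shell, stop one OSD daemon and watch whether the bench output stalls
    sudo systemctl stop ceph-osd@0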

[ceph-users] Re: Free space in ec-pool should I worry?

2021-11-01 Thread David Orman
The balancer does a pretty good job. It's the PG autoscaler that has bitten us frequently enough that we always ensure it is disabled for all pools. David On Mon, Nov 1, 2021 at 2:08 PM Alexander Closs wrote: > I can add another 2 positive datapoints for the balancer, my personal and > work
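For anyone wanting to do the same, a sketch of switching the autoscaler off (pool name is a placeholder):

    # per existing pool
    ceph osd pool set <pool> pg_autoscale_mode off
    # default for newly created pools
    ceph config set global osd_pool_default_pg_autoscale_mode off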

[ceph-users] Re: Free space in ec-pool should I worry?

2021-11-01 Thread Alexander Closs
I can add another 2 positive datapoints for the balancer: my personal and work clusters are both happily balancing. Good luck :) -Alex On 11/1/21, 3:05 PM, "Josh Baergen" wrote: Well, those who have negative reviews are often the most vocal. :) We've had few, if any, problems with

[ceph-users] Re: Free space in ec-pool should I worry?

2021-11-01 Thread Alexander Closs
Max available = free space actually usable now based on OSD usage, not including already-used space. -Alex MIT CSAIL On 11/1/21, 2:18 PM, "Szabo, Istvan (Agoda)" wrote: It says max available: 115TB and current use is 104TB, what I don’t understand is where the max available comes from
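In other words, it is the per-pool estimate ceph df prints, which is roughly limited by how full the busiest OSDs behind the pool already are and by the pool's replication/EC overhead. A quick way to see both sides with the standard CLI:

    ceph df detail       # STORED / USED / MAX AVAIL per pool
    ceph osd df tree     # per-OSD %USE that feeds into that estimate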

[ceph-users] Re: Free space in ec-pool should I worry?

2021-11-01 Thread Etienne Menguy
Hi, Why do you think it’s used at 91%? Ceph reports 47.51% usage for this pool. - Etienne Menguy etienne.men...@croit.io > On 1 Nov 2021, at 18:03, Szabo, Istvan (Agoda) wrote: > > Hi, > > Theoretically my data pool is at 91% used but the fullest OSD is at 60%, > should I worry? > >

[ceph-users] Re: Pg autoscaling and device_health_metrics pool pg sizing

2021-11-01 Thread Yury Kirsanov
Hi Alex, Switch the autoscaler to the 'scale-up' profile; it will keep PGs at the minimum and increase them as required. The default one is 'scale-down'. Regards, Yury. On Tue, Nov 2, 2021 at 3:31 AM Alex Petty wrote: > Hello, > > I’m evaluating Ceph as a storage option, using ceph version 16.2.6, > Pacific
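A sketch of what that looks like, assuming the Pacific-era autoscale-profile syntax:

    # start pools at minimal PGs and let the autoscaler grow them as data arrives
    ceph osd pool set autoscale-profile scale-up
    # review the per-pool targets afterwards
    ceph osd pool autoscale-status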

[ceph-users] Pg autoscaling and device_health_metrics pool pg sizing

2021-11-01 Thread Alex Petty
Hello, I’m evaluating Ceph as a storage option, using ceph version 16.2.6, Pacific stable installed using cephadm. I was hoping to use PG autoscaling to reduce ops efforts. I’m standing this up on a cluster with 96 OSDs across 9 hosts. The device_health_metrics pool was created by Ceph
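While the rest of the question is truncated here, the current PG settings for that pool can be checked with the standard CLI (pool name taken from the message above):

    ceph osd pool get device_health_metrics pg_num
    ceph osd pool ls detail | grep device_health_metrics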

[ceph-users] Re: Performance degradation with upgrade from Octopus to Pacific

2021-11-01 Thread Igor Fedotov
Then highly likely you're bitten by https://tracker.ceph.com/issues/52089. This has been fixed starting with 16.2.6. So please update, or wait a bit till 16.2.7 is released, which is going to happen shortly. Thanks, Igor On 11/1/2021 7:25 PM, Dustin Lagoy wrote: I am running a cephadm base
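Since Dustin mentions running under cephadm, the update would typically be driven by the orchestrator; a sketch, with the target version pinned purely as an example:

    ceph orch upgrade start --ceph-version 16.2.6
    ceph orch upgrade status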

[ceph-users] Re: Performance degradation with upgrade from Octopus to Pacific

2021-11-01 Thread Igor Fedotov
Hey Dustin, what Pacific version have you got? Thanks, Igor On 11/1/2021 7:08 PM, Dustin Lagoy wrote: Hi everyone, This is my first time posting here, so it's nice to meet you all! I have a Ceph cluster that was recently upgraded from Octopus to Pacific and now the write performance is
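If it helps, the cluster can report that directly; a quick check with the standard CLI:

    ceph versions    # lists the release each running daemon reports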

[ceph-users] Re: bluestore zstd compression questions

2021-11-01 Thread Igor Fedotov
On 10/29/2021 1:06 PM, Elias Abacioglu wrote: I don't have any data yet. I set up a k8s cluster and set up CephFS, RGW and RBD for k8s. So it's hard to tell beforehand what we will store or to know the compression ratios, which makes it hard to know how to benchmark, but I guess a mix of everything
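Once there is representative data, per-pool zstd can be enabled and the achieved ratio read back; a minimal sketch (pool name is a placeholder):

    ceph osd pool set <pool> compression_algorithm zstd
    ceph osd pool set <pool> compression_mode aggressive
    # compressed vs. uncompressed usage shows up in the pool stats
    ceph df detail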

[ceph-users] Re: Upgrade to 16.2.6 and osd+mds crash after bluestore_fsck_quick_fix_on_mount true

2021-11-01 Thread Igor Fedotov
Hi Thilo, theoretically this is a recoverable case - due to the bug, a new prefix was inserted at the beginning of every OMAP record instead of replacing the old one. So one just has to remove the old prefix to fix that (the to-be-removed prefix starts after the first '.' char and ends with the second
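Purely as a sketch of the string transformation being described - assuming the truncated sentence ends with "the second '.' char", and keeping in mind the real repair has to operate on the OSD's OMAP keys rather than on shell text:

    # drop the stale segment sitting between the first and second '.'
    echo "newprefix.oldprefix.rest-of-key" | sed 's/^\([^.]*\)\.[^.]*\./\1./'
    # -> newprefix.rest-of-key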

[ceph-users] Re: NFS Ganesha Active Active Question

2021-11-01 Thread Daniel Gryniewicz
You can fail over from one running Ganesha to another, using something like ctdb or pacemaker/corosync. This is how some other clustered filesystems (e.g. Gluster) use Ganesha. This is not how the Ceph community has decided to implement HA with Ganesha, so it will be a more manual setup for you,

[ceph-users] Re: Ceph-Dokan Mount Caps at ~1GB transfer?

2021-11-01 Thread Radoslav Milanov
Have you tried this with the native client under Linux? It could just be slow CephFS? On 1.11.2021 06:40, Mason-Williams, Gabryel (RFI,RAL,-) wrote: Hello, We have been trying to use Ceph-Dokan to mount cephfs on Windows. When transferring any data below ~1GB the transfer speed is as
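For comparison, a typical kernel-client mount on Linux looks something like this (monitor address, user name and secret file are placeholders):

    sudo mount -t ceph 192.168.0.10:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret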

[ceph-users] Ceph-Dokan Mount Caps at ~1GB transfer?

2021-11-01 Thread Mason-Williams, Gabryel (RFI,RAL,-)
Hello, We have been trying to use Ceph-Dokan to mount cephfs on Windows. When transferring any data below ~1GB, the transfer speed is as quick as desired and it works perfectly. However, once more than ~1GB has been transferred, the connection stops being able to send data and everything seems to
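For context, the Windows-side mount is done roughly like this (drive letter is a placeholder; cluster details and credentials come from the local ceph.conf and keyring):

    # mount the filesystem at drive x: using the settings from ceph.conf
    ceph-dokan.exe -l x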