[ceph-users] Re: EC Profiles & DR

2023-12-05 Thread Danny Webb
Sort of. It means you can lose two chunks and have no data loss, but Ceph will do its best to protect you from data loss by offlining the pool until the required number of chunks is up. See min_size here: https://docs.ceph.com/en/latest/rados/operations/pools/

[ceph-users] Re: EC Profiles & DR

2023-12-05 Thread Danny Webb
Usually EC requires at least k+1 chunks to be up and active for the pool to be working. Setting the min value to k risks data loss.
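A toy sketch (not Ceph code) of how min_size interacts with k and m, assuming the default for EC pools of min_size = k + 1; the k=4, m=2 profile below is just an illustration:

```python
# Toy model of EC PG availability. Assumption: min_size = k + 1 (Ceph's
# default for erasure-coded pools).
def pg_state(k: int, m: int, surviving_chunks: int, min_size: int) -> str:
    if surviving_chunks < k:
        return "data loss"   # fewer than k chunks: objects unrecoverable
    if surviving_chunks < min_size:
        return "inactive"    # pool offlined until recovery restores chunks
    return "active"

# k=4, m=2: two failures are survivable without data loss, but with
# min_size = k + 1 = 5 the PG goes inactive after the second failure
# until backfill brings a chunk back.
for lost in range(4):
    print(lost, pg_state(4, 2, 6 - lost, 5))
```

This is why dropping min_size to k is risky: the pool keeps accepting writes with zero remaining redundancy, so one more failure is data loss rather than an outage.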

[ceph-users] Re: Seagate Exos power settings - any experiences at your sites?

2023-11-08 Thread Danny Webb
We've had some issues with Exos drives dropping out of our sas controllers (LSI SAS3008 PCI-Express Fusion-MPT SAS-3) intermittently which we believe is due to this. Upgrading the drive firmware largely solved it for us so we never ended up messing about with the power settings.

[ceph-users] Re: Manual resharding with multisite

2023-10-09 Thread Danny Webb
This only works if you reshard on the primary zone. Like Yixin, we've tried resharding on the primary zone where data is held on a secondary zone, and all that results in is a complete loss of all index data for the resharded bucket on the secondary zone. The only way to use multisite

[ceph-users] Re: RBD Disk Usage

2023-08-07 Thread Danny Webb
worth also mentioning that there are several ways to discard data (automatically, timed, manually) all with their own caveats. We find it's easiest to simply mount with the discard option and take the penalty up front on deletion. Redhat has a good explanation of all the options:
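A minimal sketch of the mount-with-discard approach described above (device and mount point are hypothetical), alongside the periodic alternative:

```shell
# Online discard: deletes are released back to the pool immediately,
# paying the penalty up front at deletion time (device/path hypothetical).
mount -o discard /dev/rbd0 /mnt/data

# Alternative: timed/manual discard via fstrim instead of the mount
# option, e.g. run weekly from a systemd timer or cron.
fstrim -v /mnt/data
```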

[ceph-users] Re: PG backfilled slow

2023-07-26 Thread Danny Webb
The SUSE docs are pretty good for this: https://www.suse.com/support/kb/doc/?id=19693 Basically, up osd-max-backfills / osd-recovery-max-active and this will allow concurrent backfills to the same device. If you watch the OSD in Grafana you should be able to see the underlying device
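A hedged sketch of the tuning described above; the values are illustrative and should be reverted once recovery finishes:

```shell
# Raise backfill/recovery concurrency cluster-wide (values illustrative).
ceph config set osd osd_max_backfills 4
ceph config set osd osd_recovery_max_active 8

# Or inject into the running OSDs without persisting the change:
ceph tell 'osd.*' injectargs '--osd-max-backfills 4 --osd-recovery-max-active 8'
```

Note that on releases using the mclock scheduler these options may be ignored unless you explicitly allow overriding the recovery settings.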

[ceph-users] Bucket resharding in multisite without data replication

2023-06-08 Thread Danny Webb
region results in the remote bucket losing its ability to list contents (seemingly breaking the index in the remote region). Is there a way (besides waiting for Reef and dynamic bucket resharding for multisite) to reshard buckets in this setup? Cheers, Danny

[ceph-users] Re: rgw service fails to start with zone not found

2023-05-08 Thread Danny Webb
Are the old multisite conf values still in ceph.conf (e.g. rgw_zonegroup, rgw_zone, rgw_realm)?
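One way to check for the stale settings the reply is asking about, a sketch assuming a standard /etc/ceph/ceph.conf path:

```shell
# Look for leftover multisite settings in the local conf file and in
# the mon config database.
grep -E 'rgw_(realm|zonegroup|zone)' /etc/ceph/ceph.conf
ceph config dump | grep -E 'rgw_(realm|zonegroup|zone)'

# Compare against what actually exists in the current period.
radosgw-admin zone list
radosgw-admin period get
```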

[ceph-users] Re: Set the Quality of Service configuration.

2023-04-02 Thread Danny Webb
For RBD workloads you can set QoS values on a per-image basis (and maybe on an entire-pool basis): https://docs.ceph.com/en/latest/rbd/rbd-config-ref/#qos-settings I'm not sure if you can do so for other workloads.
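A sketch of both the per-image and the pool-wide variants from the linked docs; the pool and image names and the limit values are hypothetical:

```shell
# Per-image limits (pool/image names and values hypothetical).
rbd config image set mypool/myimage rbd_qos_iops_limit 1000
rbd config image set mypool/myimage rbd_qos_bps_limit 104857600   # 100 MiB/s

# Pool-wide default, inherited by images without their own override.
rbd config pool set mypool rbd_qos_iops_limit 2000
```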

[ceph-users] Re: EC profiles where m>k (EC 8+12)

2023-03-24 Thread Danny Webb

[ceph-users] Re: Flapping OSDs on pacific 16.2.10

2023-01-18 Thread Danny Webb

[ceph-users] Re: How to check available storage with EC and different sized OSD's ?

2022-11-09 Thread Danny Webb
k=2 m=1 plugin=jerasure technique=reed_sol_van w=8 Paweł On 8.11.2022 at 15:47, Danny Webb wrote: > with an m value of 1, if you lost a single OSD/failure domain you'd end up with a read-only pg or cluster. Usually you need at least k+1 to survive a failure domain failure
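The capacity arithmetic behind this thread can be sketched quickly; the raw size below is hypothetical, and the k/m values are the ones quoted in the profile:

```python
# Rough usable-capacity math for an erasure-coded pool.
def ec_overhead(k: int, m: int) -> float:
    """Raw bytes stored per logical byte under a k+m profile."""
    return (k + m) / k

def ec_usable(raw_bytes: float, k: int, m: int) -> float:
    """Logical capacity available from raw_bytes under a k+m profile."""
    return raw_bytes * k / (k + m)

# The k=2, m=1 profile above: 1.5x overhead, so 2/3 of raw is usable,
# but only one chunk of redundancy across failure domains.
print(ec_overhead(2, 1))       # 1.5
print(ec_usable(9e12, 2, 1))   # 6 TB usable from a hypothetical 9 TB raw
```

With different-sized OSDs the real figure is further capped by the smallest failure domain, since every PG needs one chunk in each of k+m domains.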

[ceph-users] Re: How to check available storage with EC and different sized OSD's ?

2022-11-08 Thread Danny Webb

[ceph-users] Compression stats on passive vs aggressive

2022-09-07 Thread Danny Webb
by bucket basis with radosgw-admin bucket stats. Is there any reason why compression stats don't come through in the second scenario at the pool level? I'm testing in a virtual lab with v6.0.8-stable-6.0-pacific. Cheers, Danny

[ceph-users] Re: Advice to create a EC pool with 75% raw capacity usable

2022-09-07 Thread Danny Webb

[ceph-users] Re: RadosGW compression vs bluestore compression

2022-08-25 Thread Danny Webb
Hi Konstantin, https://docs.ceph.com/en/latest/radosgw/compression/ vs, say: https://www.redhat.com/en/blog/red-hat-ceph-storage-33-bluestore-compression-performance Cheers, Danny

[ceph-users] RadosGW compression vs bluestore compression

2022-08-21 Thread Danny Webb
Hi, What is the difference between using rgw compression vs enabling compression on a pool? Is there any reason why you'd use one over the other for the data pool of a zone? Cheers, Danny
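The two mechanisms being compared are configured in different places; a sketch assuming the default zone and placement names:

```shell
# RGW-level: objects are compressed at the gateway, per placement target.
radosgw-admin zone placement modify --rgw-zone default \
    --placement-id default-placement --compression zstd

# BlueStore-level: blobs are compressed at the OSD, per pool.
ceph osd pool set default.rgw.buckets.data compression_algorithm zstd
ceph osd pool set default.rgw.buckets.data compression_mode aggressive
```

One practical difference: RGW compression happens before the data is striped to RADOS, so sizes reported by radosgw-admin reflect it, whereas BlueStore compression is invisible to RGW accounting.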

[ceph-users] Re: Default erasure code profile not working for 3 node cluster?

2022-07-25 Thread Danny Webb

[ceph-users] Multisite upgrade ordering

2022-06-10 Thread Danny Webb
site and then do the secondary sites? Cheers, Danny