Sort of. It means you can lose 2 and have no data loss. But Ceph will do
its best to protect you from data loss by offlining the pool until the required
number of chunks is up. See min_size here:
https://docs.ceph.com/en/latest/rados/operations/pools/
From that page:
Usually EC requires at least k+1 to be up and active for the pool to be
working. Setting the min value to k risks data loss.
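As a rough sketch (pool name is just an example; for a k=4,m=2 profile the
default min_size is k+1=5):

  ceph osd pool get ecpool min_size      # inspect the current value
  ceph osd pool set ecpool min_size 5    # keep the k+1 safety margin

Dropping min_size to k keeps the pool serving I/O with one more failure domain
down, at the cost of the data-loss risk described above.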
From: duluxoz
We've had some issues with Exos drives dropping out of our SAS controllers (LSI
SAS3008 PCI-Express Fusion-MPT SAS-3) intermittently, which we believe is due to
this. Upgrading the drive firmware largely solved it for us, so we never ended
up messing about with the power settings.
This only works if you reshard on the primary zone. Like Yixin, we've tried
resharding on the primary zone where data is held on a secondary zone, and all
that results in is a complete loss of all index data for the resharded bucket on
the secondary zone. The only way to use multisite
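For reference, the reshard itself is run on the primary along these lines
(bucket name and shard count are just examples):

  radosgw-admin reshard add --bucket=mybucket --num-shards=101
  radosgw-admin reshard process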
Worth also mentioning that there are several ways to discard data
(automatically, timed, manually), all with their own caveats. We find it's
easiest to simply mount with the discard option and take the penalty up front
on deletion. Red Hat has a good explanation of all the options:
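e.g. (device and mount point are just examples):

  mount -o discard /dev/rbd0 /mnt/rbd
  # or persistently in /etc/fstab:
  # /dev/rbd0  /mnt/rbd  ext4  defaults,discard  0 0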
The SUSE docs are pretty good for this:
https://www.suse.com/support/kb/doc/?id=19693
Basically, up osd_max_backfills / osd_recovery_max_active and this will
allow concurrent backfills to the same device. If you watch the OSD in Grafana
you should be able to see the underlying device
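Something like this (values are just examples; on recent releases the mClock
scheduler may override these settings):

  ceph config set osd osd_max_backfills 4
  ceph config set osd osd_recovery_max_active 8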
region results in the remote bucket losing
its ability to list contents (seemingly breaking the index in the remote
region).
Is there a way (besides waiting for Reef and dynamic bucket resharding for
multisite) to reshard buckets in this setup?
Cheers,
Danny
Are the old multisite conf values still in ceph.conf (e.g. rgw_zonegroup,
rgw_zone, rgw_realm)?
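A quick way to check (paths assume a standard install):

  grep -E 'rgw_(realm|zonegroup|zone)' /etc/ceph/ceph.conf
  ceph config dump | grep -E 'rgw_(realm|zonegroup|zone)'   # if set via the mon config store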
From: Adiga, Anantha
Subject: [ceph-users] rgw service fails to start with zone not found
For RBD workloads you can set QoS values on a per-image basis (and maybe on an
entire pool basis):
https://docs.ceph.com/en/latest/rbd/rbd-config-ref/#qos-settings
I'm not sure if you can do so for other workloads.
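e.g. (pool/image names and limits are just examples):

  rbd config image set mypool/myimage rbd_qos_iops_limit 1000
  rbd config pool set mypool rbd_qos_bps_limit 104857600   # pool-wide default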
From: farhad kh
k=2
m=1
plugin=jerasure
technique=reed_sol_van
w=8
Paweł
On 8 Nov 2022 at 15:47, Danny Webb wrote:
> with an m value of 1, if you lost a single OSD/failure domain you'd end up with
> a read-only PG or cluster. Usually you need at least k+1 to survive a
> failure domain failure
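For reference, a profile like the one above can be created and inspected with
(profile name is just an example):

  ceph osd erasure-code-profile set myprofile k=2 m=1 plugin=jerasure technique=reed_sol_van
  ceph osd erasure-code-profile get myprofile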
by bucket basis with
radosgw-admin bucket stats. Is there any reason why compression stats don't
come through in the second scenario at the pool level?
I'm testing in a virtual lab with v6.0.8-stable-6.0-pacific.
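For comparison, both views can be pulled like this (bucket name is just an
example):

  radosgw-admin bucket stats --bucket=mybucket   # per-bucket compression stats
  ceph df detail                                 # pool-level USED COMPR / UNDER COMPR columns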
Cheers,
Danny
Hi Konstantin,
https://docs.ceph.com/en/latest/radosgw/compression/
vs say:
https://www.redhat.com/en/blog/red-hat-ceph-storage-33-bluestore-compression-performance
Cheers,
Danny
From: Konstantin Shalygin
Hi,
What is the difference between using rgw compression vs enabling compression on
a pool? Is there any reason why you'd use one over the other for the data pool
of a zone?
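For context, the two are configured in different places (zone, placement, and
pool names below are the defaults, and zstd is just an example algorithm):

  radosgw-admin zone placement modify --rgw-zone=default --placement-id=default-placement --compression=zstd
  ceph osd pool set default.rgw.buckets.data compression_algorithm zstd
  ceph osd pool set default.rgw.buckets.data compression_mode aggressive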
Cheers,
Danny
site and then do the
secondary sites?
Cheers,
Danny