Hi,
Is anybody using the LRC plugin?
I came to the conclusion that LRC k=9, m=3, l=4 is not the same as
jerasure k=9, m=6 in terms of protection against failures and that I
should use k=9, m=6, l=5 to get a level of resilience >= jerasure k=9,
m=6. The example in the documentation (k=4, m=2, l
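For reference, a minimal sketch of how such a profile might be created, using the simple LRC form from the docs (profile and pool names are made up; pg counts and crush-failure-domain should be adjusted to your topology):
# ceph osd erasure-code-profile set lrc_k9_m6_l5 plugin=lrc k=9 m=6 l=5 crush-failure-domain=host
# ceph osd pool create lrcpool 64 64 erasure lrc_k9_m6_l5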
Perhaps this option triggered the crush map change:
osd crush update on start
Each time the OSD starts, it verifies it is in the correct location in
the CRUSH map and, if it is not, it moves itself.
https://docs.ceph.com/en/quincy/rados/operations/crush-map/
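If that turns out to be the culprit, a hedged sketch of turning the behaviour off (the option is the one described on the linked page):
# ceph config set osd osd_crush_update_on_start false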
Joachim
Johan Hattne wrote on
We have a ceph cluster with a cephfs filesystem that we use mostly for
backups. When I do a "ceph -s" or a "ceph df", it reports lots of space:
data:
  pools:   3 pools, 4104 pgs
  objects: 1.09 G objects, 944 TiB
  usage:   1.5 PiB used, 1.0 PiB / 2.5 PiB avail
GLOBAL:
SI
Hi,
We currently use Ceph Pacific 16.2.10, deployed with cephadm, on this
storage cluster. Last night, one of our OSDs died. Since its storage is
an SSD, we ran hardware checks and found no issue with the SSD itself.
However, if we try starting the service again, the container
just cras
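No idea yet what is failing there, but a hedged first step for a cephadm-managed OSD would be to pull the container logs (osd.12 and <fsid> below are placeholders for the real OSD id and cluster fsid):
# cephadm logs --name osd.12
# journalctl -u ceph-<fsid>@osd.12.service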
I think this is resolved—and you're right about the 0-weight of the root
bucket being strange. I had created the rack buckets with
# ceph osd crush add-bucket rack-0 rack
whereas I should have used something like
# ceph osd crush add-bucket rack-0 rack root=default
There's a bit in the docume
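For anyone else who runs into this: instead of recreating the buckets, an existing bucket can also be moved under the root afterwards, something like:
# ceph osd crush move rack-0 root=default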
I guess this is related to your crush rules.
Unfortunately I don't know much about creating the rules, but someone
could give more insight if you also provide a
crush rule dump
Your "-1 0 root default" is a bit strange.
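For reference, the plain-text output of these two commands should be enough to go on:
# ceph osd crush rule dump
# ceph osd tree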
On 1 April 2023 01:01:39 MESZ, Johan Hattne wrote:
>Here goes:
ceph-users@ceph.io
stop
From: Thomas Widhalm
Sent: Wednesday, April 5, 2023 7:26 AM
To: ceph-users@ceph.io
Subject: [ceph-users] Re: quincy v17.2.6 QE Validation status
Sorry for interfering, but: Wh!! Thank you so much for the
great work. Can't wait f
Hi,
We're currently testing out lua scripting in the Ceph Object Gateway
(Radosgw).
Ceph version: 17.2.5
We've tried a simple experiment with a basic Lua script based on the
documentation (see fixed-width text below).
However, the issue we're having is that we can't find the log mes
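For anyone following along, a hedged sketch of how such a script is typically loaded per the docs (the file name is a placeholder; RGWDebugLog() output only shows up in the RGW log once the rgw debug level is raised, e.g. to 20):
# radosgw-admin script put --infile=./script.lua --context=preRequest
# ceph config set client.rgw debug_rgw 20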
Thx, Josh!
We will start releasing now.
Release notes - https://github.com/ceph/ceph/pull/50721
On Wed, Apr 5, 2023 at 7:16 AM Josh Durgin wrote:
> The LRC upgraded with no problems, so this release is good to go!
>
> Josh
>
> On Mon, Apr 3, 2023 at 3:36 PM Yuri Weinstein wrote:
>
>> Josh, th
On Fri, Mar 17, 2023 at 1:56 AM Ashu Pachauri wrote:
>
> Hi Xiubo,
>
> As you have correctly pointed out, I was talking about the stripe_unit
> setting in the file layout configuration. Here is the documentation for
> that for anyone else's reference:
> https://docs.ceph.com/en/quincy/cephfs/file-l
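For reference, a hedged sketch of how stripe_unit can be set through the layout virtual xattrs (the file must still be empty; the file name and the 1 MiB value are just examples):
# setfattr -n ceph.file.layout.stripe_unit -v 1048576 somefile
# getfattr -n ceph.file.layout somefile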
Thank you for the suggestion, Frank. We've managed to avoid patches so far,
but I guess that convenience ends now :(
With
# lsblk -P -p -o 'NAME' | wc -l
137
it takes about 10 minutes to run. 70 OSDs would probably also bring you over the
2-minute timeout window, so I certainly wouldn't consider updating
Sorry for interfering, but: Wh!! Thank you so much for the
great work. Can't wait for the release with a good chance to get access
to my data again.
On 05.04.23 16:15, Josh Durgin wrote:
The LRC upgraded with no problems, so this release is good to go!
Josh
On Mon, Apr 3, 2023 at 3:
The LRC upgraded with no problems, so this release is good to go!
Josh
On Mon, Apr 3, 2023 at 3:36 PM Yuri Weinstein wrote:
> Josh, the release is ready for your review and approval.
>
> Adam, can you please update the LRC upgrade to 17.2.6 RC?
>
> Thx
>
>
> On Wed, Mar 29, 2023 at 3:07 PM Yuri
Hi Boris,
the debug log showed that the problem was that the customer had
accidentally misconfigured placement_targets and default_placement in the
zonegroup configuration, which caused access-denied errors during bucket
creation.
This is what was found in debug logs:
s3:create_bucket user not permitted to u
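For anyone debugging something similar, a hedged sketch of how the relevant placement settings can be inspected:
# radosgw-admin zonegroup get
# radosgw-admin zonegroup placement list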
Hello, for my multisite configuration, I create and use a root pool other than
.rgw.root to store the realm and zone configuration, with the following options:
rgw_realm_root_pool=myzone.rgw.root
rgw_zonegroup_root_pool=myzone.rgw.root
rgw_zone_root_pool=myzone.rgw.root
But I can see during the syncron
Hi Mikael, thanks for sharing this (see also
https://www.stroustrup.com/whitespace98.pdf, python ha ha ha). We would
probably have observed the same problem (70+ OSDs per host). You might want to
consider configuring deployment against a local registry and using a patched
image. Local container i
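A hedged sketch of what pointing the cluster at such an image could look like (registry, repository, and tag are made up):
# ceph config set global container_image registry.local:5000/ceph/ceph:v16.2.11-patched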
On Wed, Apr 05, 2023 at 01:18:57AM +0200, Mikael Öhman wrote:
Trying to upgrade a containerized setup from 16.2.10 to 16.2.11 gave us two
big surprises; I wanted to share them in case anyone else encounters the same. I
don't see any nice solution to this apart from a new release that fixes the
perform
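For context, a cephadm-managed upgrade of this kind is normally kicked off along these lines (the image shown is the stock one; a patched image could be substituted via --image):
# ceph orch upgrade start --ceph-version 16.2.11
or, with an explicit image:
# ceph orch upgrade start --image quay.io/ceph/ceph:v16.2.11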