notes from today's meeting of the Ceph Leadership Team
Squid 19.2.0 release
Laura successfully built/signed prerelease packages
LRC upgrade was the last step, but hit iscsi failures again
* Ilya to create a tracker issue and tag Dan/Xiubo for details
* Ilya to reschedule rbd
On Fri, Jul 19, 2024 at 9:04 AM Stefan Kooman wrote:
>
> Hi,
>
> On 12-07-2024 00:27, Yuri Weinstein wrote:
>
> ...
>
> > * For packages, see https://docs.ceph.com/en/latest/install/get-packages/
>
> I see that only packages have been built for Ubuntu 22.04 LTS. Will
> there also be packages built
--bucket={bucket_name}` to fix up the bucket object count and object
> sizes at the end
>
> This process takes quite some time and I can't say if it's 100%
> perfect but it enabled us to get to a state where we could delete the
> buckets and clean up the objects.
> I h
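for reference, a rough sketch of that kind of stats fix-up pass, assuming the command being referred to is radosgw-admin bucket check (the bucket name here is hypothetical):

  # report index/stats inconsistencies for one bucket
  radosgw-admin bucket check --bucket=big-bucket
  # fix up the bucket's object count and size accounting
  radosgw-admin bucket check --fix --bucket=big-bucket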
but the secondary zone isn't processing them in this case
>
> Thank you
>
>
>
> From: Casey Bodley
> Sent: Tuesday, July 9, 2024 10:39 PM
> To: Szabo, Istvan (Agoda)
> Cc: Eugen Block ; ceph-users@ceph.io
> Subject: Re: [ceph-users]
in general, these omap entries should be evenly spread over the
bucket's index shard objects. but there are two features that may
cause entries to clump on a single shard:
1. for versioned buckets, multiple versions of the same object name
map to the same index shard. this can become an issue if a
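for illustration, a couple of hedged ways to see how index entries are actually distributed; pool, bucket and shard names are hypothetical, and the index shard objects are typically named .dir.<bucket-id>.<shard>:

  # per-bucket shard count and fill status
  radosgw-admin bucket limit check
  # count omap entries on one index shard object directly
  radosgw-admin bucket stats --bucket=big-bucket    # note the bucket "id"
  rados -p default.rgw.buckets.index listomapkeys .dir.<bucket-id>.0 | wc -l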
this was discussed in the ceph leadership team meeting yesterday, and
we've agreed to re-number this release to 18.2.4
On Wed, Jul 3, 2024 at 1:08 PM wrote:
>
>
> On Jul 3, 2024 5:59 PM, Kaleb Keithley wrote:
> >
> >
> >
>
> > Replacing the tar file is problematic too, if only because it's a pot
(cc Thomas Goirand)
in April, an 18.2.3 tarball was uploaded to
https://download.ceph.com/tarballs/ceph_18.2.3.orig.tar.gz. that's been
picked up and packaged by the Debian project under the assumption that it
was a supported release
when we do finally release 18.2.3, we will presumably overwrite
On Mon, Jul 1, 2024 at 10:23 AM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/66756#note-1
>
> Release Notes - TBD
> LRC upgrade - TBD
>
> (Reruns were not done yet.)
>
> Seeking approvals/reviews for:
>
> smoke
> rados - Radek, Laura
>
hi Adam,
On Thu, Jun 27, 2024 at 4:41 AM Adam Prycki wrote:
>
> Hello,
>
> I have a question. Do people use rgw lifecycle policies in production?
> I had big hopes for this technology but in practice it seems to be very
> unreliable.
>
> Recently I've been testing different pool layouts and using
# quincy now past estimated 2024-06-01 end-of-life
will 17.2.8 be the last point release? maybe not, depending on timing
# centos 8 eol
* Casey tried to summarize the fallout in
https://lists.ceph.io/hyperkitty/list/d...@ceph.io/thread/H7I4Q4RAIT6UZQNPPZ5O3YB6AUXLLAFI/
* c8 builds were disabled
On Thu, May 23, 2024 at 11:50 AM Szabo, Istvan (Agoda)
wrote:
>
> Hi,
>
> Wondering what the best practice is to scale RGW: increase the thread numbers or
> spin up more gateways?
>
>
> *
> Let's say I have 21000 connections on my haproxy
> *
> I have 3 physical gateway servers so let's say each
On Fri, Apr 12, 2024 at 2:38 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/65393#note-1
> Release Notes - TBD
> LRC upgrade - TBD
>
> Seeking approvals/reviews for:
>
> smoke - infra issues, still trying, Laura PTL
>
> rados - Radek,
unfortunately, this cloud sync module only exports data from ceph to a
remote s3 endpoint, not the other way around:
"This module syncs zone data to a remote cloud service. The sync is
unidirectional; data is not synced back from the remote zone."
i believe that rclone supports copying from one s
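for example, something along these lines, assuming two rclone remotes of type s3 ("other" and "ceph" are hypothetical remote names already configured with the right endpoints and credentials:

  # copy objects from the remote s3 service into the ceph cluster
  rclone copy other:source-bucket ceph:dest-bucket --progress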
On Wed, Apr 3, 2024 at 3:09 PM Lorenz Bausch wrote:
>
> Hi Casey,
>
> thank you so much for the analysis! We tested the upgrade intensively, but
> the buckets in our test environment were probably too small to get
> dynamically resharded.
>
> > after upgrading to the Quincy release, rgw would
> > loo
object names when trying to list those buckets. 404
NoSuchKey is the response i would expect in that case
On Wed, Apr 3, 2024 at 12:20 PM Casey Bodley wrote:
>
> On Wed, Apr 3, 2024 at 11:58 AM Lorenz Bausch wrote:
> >
> > Hi everybody,
> >
> > we upgraded our contain
On Wed, Apr 3, 2024 at 11:58 AM Lorenz Bausch wrote:
>
> Hi everybody,
>
> we upgraded our containerized Red Hat Pacific cluster to the latest
> Quincy release (Community Edition).
i'm afraid this is not an upgrade path that we try to test or support.
Red Hat makes its own decisions about what to
Ubuntu 22.04 packages are now available for the 17.2.7 Quincy release.
The upcoming Squid release will not support Ubuntu 20.04 (Focal
Fossa). Ubuntu users planning to upgrade from Quincy to Squid will
first need to perform a distro upgrade to 22.04.
Getting Ceph
* Git at git://githu
anything we can do to narrow down the policy issue here? any of the
Principal, Action, Resource, or Condition matches could be failing
here. you might try replacing each with a wildcard, one at a time,
until you see the policy take effect
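for illustration, this is what a fully wildcarded statement looks like; in practice you'd start from the failing policy and relax one element at a time as described above (endpoint and bucket name are hypothetical):

  aws --endpoint-url http://rgw.example.com s3api put-bucket-policy \
      --bucket test-bucket \
      --policy '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":"*","Action":"s3:*","Resource":"*"}]}'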
On Wed, Dec 13, 2023 at 5:04 AM Marc Singer wrote:
>
> Hi
hey Christian, i'm guessing this relates to
https://tracker.ceph.com/issues/63373 which tracks a deadlock in s3
DeleteObjects requests when multisite is enabled.
rgw_multi_obj_del_max_aio can be set to 1 as a workaround until the
reef backport lands
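a rough sketch of that workaround with centralized config (the radosgws may need a restart to pick it up):

  ceph config set client.rgw rgw_multi_obj_del_max_aio 1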
On Wed, Mar 6, 2024 at 2:41 PM Christian Kugler
thanks Giada, i see that you created
https://tracker.ceph.com/issues/64547 for this
unfortunately, this topic metadata doesn't really have a permission
model at all. topics are shared across the entire tenant, and all
users have access to read/overwrite those topics
a lot of work was done for htt
Estimate on release timeline for 17.2.8?
- after pacific 16.2.15 and reef 18.2.2 hotfix
(https://tracker.ceph.com/issues/64339,
https://tracker.ceph.com/issues/64406)
Estimate on release timeline for 19.2.0?
- target April, depending on testing and RCs
- Testing plan for Squid beyond dev freeze (r
run here, approved
>
> ceph-volume - Guillaume, fixed by
> https://github.com/ceph/ceph/pull/55658 retesting
>
> On Thu, Feb 8, 2024 at 8:43 AM Casey Bodley wrote:
> >
> > thanks, i've created https://tracker.ceph.com/issues/64360 to track
> > these backpo
i've cc'ed Matt who's working on the s3 object integrity feature
https://docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html,
where rgw compares the generated checksum with the client's on ingest,
then stores it with the object so clients can read it back for later
integrit
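for reference, this is what the client side of that feature looks like with awscli; whether a given rgw release honors the checksum depends on the feature work above (endpoint, bucket and key are hypothetical):

  aws --endpoint-url http://rgw.example.com s3api put-object \
      --bucket mybucket --key myobject --body ./myobject \
      --checksum-algorithm SHA256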
thanks, i've created https://tracker.ceph.com/issues/64360 to track
these backports to pacific/quincy/reef
On Thu, Feb 8, 2024 at 7:50 AM Stefan Kooman wrote:
>
> Hi,
>
> Is this PR: https://github.com/ceph/ceph/pull/54918 included as well?
>
> You definitely want to build the Ubuntu / debian pac
On Fri, Feb 2, 2024 at 11:21 AM Chris Palmer wrote:
>
> Hi Matthew
>
> AFAIK the upgrade from quincy/deb11 to reef/deb12 is not possible:
>
> * The packaging problem you can work around, and a fix is pending
> * You have to upgrade both the OS and Ceph in one step
> * The MGR will not run un
On Mon, Jan 29, 2024 at 4:39 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/64151#note-1
>
> Seeking approvals/reviews for:
>
> rados - Radek, Laura, Travis, Ernesto, Adam King
> rgw - Casey
rgw approved, thanks
> fs - Venky
> rbd -
On Wed, Jan 31, 2024 at 3:43 AM garcetto wrote:
>
> good morning,
> i was struggling trying to understand why i cannot find this setting on
> my reef version, is it because is only on latest dev ceph version and not
> before?
that's right, this new feature will be part of the squid release. we
my understanding is that default placement is stored at the bucket
level, so changes to the user's default placement only take effect for
newly-created buckets
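for a newly-created bucket, the placement can also be chosen explicitly at creation time. a hedged example, assuming the zonegroup's api_name is "default" and a placement target called "new-placement" exists:

  aws --endpoint-url http://rgw.example.com s3api create-bucket \
      --bucket placed-bucket \
      --create-bucket-configuration LocationConstraint=default:new-placement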
On Sun, Nov 12, 2023 at 9:48 PM Huy Nguyen wrote:
>
> Hi community,
> I'm using Ceph version 16.2.13. I tried to set default_storage_clas
ithi/7450325/
> >>
> >> Seems to be related to nfs-ganesha. I've reached out to Frank Filz
> >> (#cephfs on ceph slack) to have a look. WIll update as soon as
> >> possible.
> >>
> >> > orch - Adam King
> >> > rbd - Ilya app
were in v16.2.12.
>>>> We upgraded the cluster to v17.2.7 two days ago and it seems obvious that
>>>> the IAM error logs started appearing the minute the rgw daemon was upgraded from
>>>> v16.2.12 to v17.2.7. Looks like there is some issue with parsing.
>>>
PM Wesley Dillingham
> wrote:
>>
>> Thank you, this has worked to remove the policy.
>>
>> Respectfully,
>>
>> *Wes Dillingham*
>> w...@wesdillingham.com
>> LinkedIn <http://www.linkedin.com/in/wesleydillingham>
>>
>>
>> On W
On Mon, Nov 6, 2023 at 4:31 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/63443#note-1
>
> Seeking approvals/reviews for:
>
> smoke - Laura, Radek, Prashant, Venky (POOL_APP_NOT_ENABLE failures)
> rados - Neha, Radek, Travis, Ernesto,
quincy 17.2.7: released!
* major 'dashboard v3' changes causing issues?
https://github.com/ceph/ceph/pull/54250 did not merge for 17.2.7
* planning a retrospective to discuss what kind of changes should go
in minor releases when members of the dashboard team are present
reef 18.2.1:
* most PRs alr
another option is to enable the rgw ops log, which includes the bucket
name for each request
the http access log line that's visible at log level 1 follows a known
apache format that users can scrape, so i've resisted adding extra
s3-specific stuff like bucket/object names there. there was some
re
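a minimal sketch of turning the ops log on with centralized config, assuming a recent release; the exact sink (rados objects, socket, or file) varies by version, and the radosgws may need a restart to pick this up:

  ceph config set client.rgw rgw_enable_ops_log true
  ceph config set client.rgw rgw_ops_log_rados true    # write entries to the log pool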
dillingham.com
> LinkedIn
>
>
> On Wed, Oct 25, 2023 at 4:41 PM Casey Bodley wrote:
>>
>> if you have an administrative user (created with --admin), you should
>> be able to use its credentials with awscli to delete or overwrite this
>> bucket policy
>>
>
if you have an administrative user (created with --admin), you should
be able to use its credentials with awscli to delete or overwrite this
bucket policy
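roughly, that could look like this, with hypothetical uid, endpoint and bucket name:

  radosgw-admin user create --uid=admin-user --display-name="Admin User" --admin
  # then, with that user's keys configured in awscli:
  aws --endpoint-url http://rgw.example.com s3api delete-bucket-policy --bucket stuck-bucket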
On Wed, Oct 25, 2023 at 4:11 PM Wesley Dillingham
wrote:
>
> I have a bucket which got injected with bucket policy which locks the
> bucket e
idi
wrote:
>
> Thanks Casey for your explanation,
>
> Yes it succeeded eventually. Sometimes after about 100 retries. It's odd that
> it stays in racing condition for that much time.
>
> Best Regards,
> Mahnoosh
>
> On Tue, Oct 24, 2023 at 5:17 PM Casey Bodley wrote:
>
errno 125 is ECANCELED, which is the code we use when we detect a
racing write. so it sounds like something else is modifying that user
at the same time. does it eventually succeed if you retry?
On Tue, Oct 24, 2023 at 9:21 AM mahnoosh shahidi
wrote:
>
> Hi all,
>
> I couldn't understand what doe
On Mon, Oct 16, 2023 at 2:52 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/63219#note-2
> Release Notes - TBD
>
> Issue https://tracker.ceph.com/issues/63192 appears to be failing several
> runs.
> Should it be fixed for this release
't configured. But knowing where to inject
> the magic that activates that interface eludes me and whether to do it
> directly on the RGW container host (and how) or on my master host is
> totally unclear to me. It doesn't help that this is an item that has
> multiple values, not
On Mon, Oct 16, 2023 at 2:52 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/63219#note-2
> Release Notes - TBD
>
> Issue https://tracker.ceph.com/issues/63192 appears to be failing several
> runs.
> Should it be fixed for this release
hey Tim,
your changes to rgw_admin_entry probably aren't taking effect on the
running radosgws. you'd need to restart them in order to set up the
new route
there also seems to be some confusion about the need for a bucket
named 'default'. radosgw just routes requests with paths starting with
'/{r
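for example, something like this in a cephadm deployment (the service name rgw.myrgw is hypothetical):

  ceph config set client.rgw rgw_admin_entry admin
  ceph orch restart rgw.myrgw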
we're tracking this in https://tracker.ceph.com/issues/61882. my
understanding is that we're just waiting for the next quincy point
release builds to resolve this
On Tue, Oct 10, 2023 at 11:07 AM Graham Derryberry
wrote:
>
> I have just started adding a ceph client on a rocky 9 system to our ceph
hi Arvydas,
it looks like this change corresponds to
https://tracker.ceph.com/issues/48322 and
https://github.com/ceph/ceph/pull/38234. the intent was to enforce the
same limitation as AWS S3 and force clients to use multipart copy
instead. this limit is controlled by the config option
rgw_max_put
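as a rough illustration (the 5 GiB default and the names below are assumptions; adjust to your cluster), you can either raise the limit or let the client fall back to multipart copy:

  # raise the single-request size limit on the radosgws
  ceph config set client.rgw rgw_max_put_size 10737418240
  # or let a high-level client do a multipart copy automatically:
  aws --endpoint-url http://rgw.example.com s3 cp s3://bkt/bigobj s3://bkt/bigobj-copy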
On Mon, Oct 9, 2023 at 9:16 AM Gilles Mocellin
wrote:
>
> Hello Cephers,
>
> I was using Ceph with OpenStack, and users could add, remove credentials
> with `openstack ec2 credentials` commands.
> But, we are moving our Object Storage service to a new cluster, and
> didn't want to tie it with Open
thanks Tobias, i see that https://github.com/ceph/ceph/pull/53414 had
a ton of test failures that don't look related. i'm working with Yuri
to reschedule them
On Thu, Oct 5, 2023 at 2:05 AM Tobias Urdin wrote:
>
> Hello Yuri,
>
> On the RGW side I would very much like to get this [1] patch in tha
On Tue, Oct 3, 2023 at 9:06 AM Thomas Bennett wrote:
>
> Hi Jonas,
>
> Thanks :) that solved my issue.
>
> It would seem to me that this is heading towards something that the clients
> s3 should paginate, but I couldn't find any documentation on how to
> paginate bucket listings.
the s3 ListBucke
On Sat, Sep 23, 2023 at 5:05 AM Matthias Ferdinand wrote:
>
> On Fri, Sep 22, 2023 at 06:09:57PM -0400, Casey Bodley wrote:
> > each radosgw does maintain its own cache for certain metadata like
> > users and buckets. when one radosgw writes to a metadata object, it
> > b
each radosgw does maintain its own cache for certain metadata like
users and buckets. when one radosgw writes to a metadata object, it
broadcasts a notification (using rados watch/notify) to other radosgws
to update/invalidate their caches. the initiating radosgw waits for
all watch/notify response
> You can see the read 1~10 in the osd logs I’ve sent here -
> https://pastebin.com/nGQw4ugd
>
> Which is weird as it seems it is not the same as what you were able to replicate.
>
> Ondrej
>
> On 22. 9. 2023, at 21:52, Casey Bodley wrote:
>
> hey Ondrej,
>
> th
hey Ondrej,
thanks for creating the tracker issue
https://tracker.ceph.com/issues/62938. i added a comment there, and
opened a fix in https://github.com/ceph/ceph/pull/53602 for the only
issue i was able to identify
On Wed, Sep 20, 2023 at 9:20 PM Ondřej Kukla wrote:
>
> I was checking the track
between sites).
>
> Thanks for anything you or others can offer.
for rgw multisite users in particular, i highly recommend trying out
the reef release. in addition to multisite resharding support, we made
a lot of improvements to multisite stability/reliability that we won't
be able to
these keys starting with "<80>0_" appear to be replication log entries
for multisite. can you confirm that this is a multisite setup? is the
'bucket sync status' mostly caught up on each zone? in a healthy
multisite configuration, these log entries would eventually get
trimmed automatically
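a quick way to check that, with a hypothetical bucket name (run it on each zone):

  radosgw-admin sync status
  radosgw-admin bucket sync status --bucket=mybucket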
On Wed
thanks Shashi, this regression is tracked in
https://tracker.ceph.com/issues/62771. we're testing a fix
On Sat, Sep 16, 2023 at 7:32 PM Shashi Dahal wrote:
>
> Hi All,
>
> We have 3 openstack clusters, each with their own ceph. The openstack
> versions are identical( using openstack-ansible) an
On Wed, Aug 23, 2023 at 10:41 AM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/62527#note-1
> Release Notes - TBD
>
> Seeking approvals for:
>
> smoke - Venky
> rados - Radek, Laura
> rook - Sébastien Han
> cephadm - Adam K
> dashb
you could potentially create a cls_crypt object class that exposes
functions like crypt_read() and crypt_write() to do this. but your
application would have to use cls_crypt for all reads/writes instead
of the normal librados read/write operations. would that work for you?
On Wed, Aug 23, 2023 at
On Thu, Aug 17, 2023 at 12:14 PM wrote:
>
> Hello,
>
> Yes, I can see that there are metrics to check the size of the compressed
> data stored in a pool with ceph df detail (relevant columns are USED COMPR
> and UNDER COMPR)
>
> Also the size of compressed data can be checked on osd level using
thanks Louis,
that looks like the same backtrace as
https://tracker.ceph.com/issues/61763. that issue has been on 'Need
More Info' because all of the rgw logging was disabled there. are you
able to share some more log output to help us figure this out?
under "--- begin dump of recent events ---",
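in case it helps, one hedged way to capture more rgw detail for the next occurrence; these values are illustrative and quite verbose, so revert them afterwards:

  ceph config set client.rgw debug_rgw 20
  ceph config set client.rgw debug_ms 1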
On Mon, Jul 31, 2023 at 11:38 AM Yuri Weinstein wrote:
>
> Thx Casey
>
> If you agree I will merge https://github.com/ceph/ceph/pull/52710
> ?
yes please
>
> On Mon, Jul 31, 2023 at 8:34 AM Casey Bodley wrote:
> >
> > On Sun, Jul 30, 2023 at 11:46 AM Yuri Wein
On Sun, Jul 30, 2023 at 11:46 AM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/62231#note-1
>
> Seeking approvals/reviews for:
>
> smoke - Laura, Radek
> rados - Neha, Radek, Travis, Ernesto, Adam King
> rgw - Casey
the pacific upgrade
Welcome to Aviv Caro as new Ceph NVMe-oF lead
Reef status:
* reef 18.1.3 built, gibba cluster upgraded, plan to publish this week
* https://pad.ceph.com/p/reef_final_blockers all resolved except for
bookworm builds https://tracker.ceph.com/issues/61845
* only blockers will merge to reef so the rel
_bug.cgi?id=2196790
>
> I was interested to see almost all of these are already in progress.
> That final one (logutils) should go to EPEL's stable repo in a week
> (faster with karma).
>
> - Ken
>
>
>
>
> On Wed, Apr 26, 2023 at 11:00 AM Casey Bodley wro
On Mon, Jul 10, 2023 at 10:40 AM wrote:
>
> Hi,
>
> yes, this is incomplete multiparts problem.
>
> Then, how does an admin delete the incomplete multipart objects?
> I mean:
> 1. Can an admin find the incomplete jobs and incomplete multipart objects?
> 2. If the first is possible, then can an admin delete all
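not necessarily the approach discussed in the rest of this thread, but with plain S3 APIs the incomplete uploads can at least be listed and aborted (hypothetical endpoint, bucket, key and upload id):

  aws --endpoint-url http://rgw.example.com s3api list-multipart-uploads --bucket big-bucket
  aws --endpoint-url http://rgw.example.com s3api abort-multipart-upload \
      --bucket big-bucket --key path/to/object --upload-id <UploadId-from-the-listing>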
while a bucket is resharding, rgw will retry several times internally
to apply the write before returning an error to the client. while most
buckets can be resharded within seconds, very large buckets may hit
these timeouts. any other cause of slow osd ops could also have that
effect. it can be hel
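for reference, a couple of hedged commands to see whether a reshard was in flight when the errors occurred (bucket name hypothetical):

  radosgw-admin reshard list
  radosgw-admin reshard status --bucket=big-bucket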
usefulness may be more justified.
> Regards, Yixin
>
> On Friday, June 30, 2023 at 11:29:16 a.m. EDT, Casey Bodley
> wrote:
>
> you're correct that the distinction is between metadata and data;
> metadata like users and buckets will replicate to all zonegr
On Mon, Jul 3, 2023 at 6:52 AM mahnoosh shahidi wrote:
>
> I think this part of the doc shows that LocationConstraint can override the
> placement and I can change the placement target with this field.
>
> When creating a bucket with the S3 protocol, a placement target can be
> > provided as part
gt; Actually, I reported a documentation bug for something very similar.
>
> On Fri, Jun 30, 2023 at 11:30 PM Casey Bodley wrote:
> >
> > you're correct that the distinction is between metadata and data;
> > metadata like users and buckets will replicate to all zonegroups,
>
you're correct that the distinction is between metadata and data;
metadata like users and buckets will replicate to all zonegroups,
while object data only replicates within a single zonegroup. any given
bucket is 'owned' by the zonegroup that creates it (or overridden by
the LocationConstraint on c
hi Jayanth,
i don't know that we have a supported way to do this. the
s3-compatible method would be to copy the object onto itself without
requesting server-side encryption. however, this wouldn't prevent
default encryption if rgw_crypt_default_encryption_key was still
enabled. furthermore, rgw ha
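a sketch of that self-copy with awscli (hypothetical endpoint, bucket and key; the metadata directive is needed so S3 accepts a copy onto the same key):

  aws --endpoint-url http://rgw.example.com s3api copy-object \
      --bucket mybucket --key myobject --copy-source mybucket/myobject \
      --metadata-directive REPLACE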
hi Boris,
we've been investigating reports of excessive polling from metadata
sync. i just opened https://tracker.ceph.com/issues/61743 to track
this. restarting the secondary zone radosgws should help as a
temporary workaround
On Tue, Jun 20, 2023 at 5:57 AM Boris Behrens wrote:
>
> Hi,
> yeste
On Sat, Jun 17, 2023 at 8:37 AM Vahideh Alinouri
wrote:
>
> Dear Ceph Users,
>
> I am writing to request the backporting changes related to the
> AsioFrontend class and specifically regarding the header_limit value.
>
> In the Pacific release of Ceph, the header_limit value in the
> AsioFrontend c
On Sat, Jun 17, 2023 at 1:11 PM Jayanth Reddy
wrote:
>
> Hello Folks,
>
> I've been experimenting with RGW encryption and found this out.
> Focusing on Quincy and Reef dev, for the SSE (any methods) to work, transit
> has to be end to end encrypted, however if there is a proxy, then [1] can
> be m
On Fri, Jun 16, 2023 at 2:55 AM Christian Rohmann
wrote:
>
> On 15/06/2023 15:46, Casey Bodley wrote:
>
> * In case of HTTP via headers like "X-Forwarded-For". This is
> apparently supported only for logging the source in the "rgw ops log" ([1])?
> Or is t
On Thu, Jun 15, 2023 at 7:23 AM Christian Rohmann
wrote:
>
> Hello Ceph-Users,
>
> context or motivation of my question is S3 bucket policies and other
> cases using the source IP address as condition.
>
> I was wondering if and how RadosGW is able to access the source IP
> address of clients if r
radosgw's object striping does not repeat, so there is no concept of
'stripe width'. rgw_obj_stripe_size just controls the maximum size of
each rados object, so the 'stripe count' is essentially just the total
s3 object size divided by rgw_obj_stripe_size
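as a rough worked example, assuming the default 4 MiB rgw_obj_stripe_size: a 1 GiB s3 object maps to roughly 1 GiB / 4 MiB = 256 rados objects, and that count just grows linearly with object size (the exact number differs slightly because the first chunk lives in the head object).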
On Tue, Jun 13, 2023 at 10:22 AM Teja A w
Weinstein wrote:
>
> Casey
>
> I will rerun rgw and we will see.
> Stay tuned.
>
> On Wed, May 31, 2023 at 10:27 AM Casey Bodley wrote:
> >
> > On Tue, May 30, 2023 at 12:54 PM Yuri Weinstein wrote:
> > >
> > > Details of this release are summarized
On Tue, May 30, 2023 at 12:54 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/61515#note-1
> Release Notes - TBD
>
> Seeking approvals/reviews for:
>
> rados - Neha, Radek, Travis, Ernesto, Adam King (we still have to
> merge https://gi
thanks for the report. this regression was already fixed in
https://tracker.ceph.com/issues/58932 and will be in the next quincy
point release
On Wed, May 31, 2023 at 10:46 AM wrote:
>
> I was running on 17.2.5 since October, and just upgraded to 17.2.6, and now
> the "mtime" property on all my
e/overwrite the original copy
>
> Best regards
> Tobias
>
> On 30 May 2023, at 14:48, Casey Bodley wrote:
>
> On Tue, May 30, 2023 at 8:22 AM Tobias Urdin
> <tobias.ur...@binero.com> wrote:
>
> Hello Casey,
>
> Thanks for the information
fference is
where they get the key
>
> [1]
> https://docs.ceph.com/en/quincy/radosgw/encryption/#automatic-encryption-for-testing-only
>
> > On 26 May 2023, at 22:45, Casey Bodley wrote:
> >
> > Our downstream QE team recently observed an md5 mismatch of replica
Our downstream QE team recently observed an md5 mismatch of replicated
objects when testing rgw's server-side encryption in multisite. This
corruption is specific to s3 multipart uploads, and only affects the
replicated copy - the original object remains intact. The bug likely
affects Ceph releases
rgw supports the 3 flavors of S3 Server-Side Encryption, along with
the PutBucketEncryption api for per-bucket default encryption. you can
find the docs in https://docs.ceph.com/en/quincy/radosgw/encryption/
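for example, the per-bucket default can be set with the standard api (hypothetical endpoint and bucket; AES256 selects SSE-S3):

  aws --endpoint-url http://rgw.example.com s3api put-bucket-encryption \
      --bucket mybucket \
      --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'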
On Mon, May 22, 2023 at 10:49 AM huxia...@horebdata.cn
wrote:
>
> Dear Alexander,
>
> Tha
That final one (logutils) should go to EPEL's stable repo in a week
> (faster with karma).
>
> - Ken
>
>
>
>
> On Wed, Apr 26, 2023 at 11:00 AM Casey Bodley wrote:
> >
> > are there any volunteers willing to help make these python packages
> > ava
On Wed, May 17, 2023 at 11:13 PM Ramin Najjarbashi
wrote:
>
> Hi
>
> I'm currently using Ceph version 16.2.7 and facing an issue with bucket
> creation in a multi-zone configuration. My setup includes two zone groups:
>
> ZG1 (Master) and ZG2, with one zone in each zone group (zone-1 in ZG1 and
>
i'm afraid that feature will be new in the reef release. multisite
resharding isn't supported on quincy
On Wed, May 17, 2023 at 11:56 AM Alexander Mamonov wrote:
>
> https://docs.ceph.com/en/latest/radosgw/multisite/#feature-resharding
> When I try this I get:
> root@ceph-m-02:~# radosgw-admin zo
sync doesn't distinguish between multipart and regular object uploads.
once a multipart upload completes, sync will replicate it as a single
object using an s3 GetObject request
replicating the parts individually would have some benefits. for
example, when sync retries are necessary, we might only
speed less than 1024 Bytes per second during
> 300 seconds.
> 2023-05-09T15:46:21.069+ 7f20b12b8700 0 WARNING: curl operation timed
> out, network average transfer speed less than 1024 Bytes per second during
> 300 seconds.
> 2023-05-09T15:46:21.069+ 7f2085ff3700 0 rgw a
On Sun, May 7, 2023 at 5:25 PM Yuri Weinstein wrote:
>
> All PRs were cherry-picked and the new RC1 build is:
>
> https://shaman.ceph.com/builds/ceph/pacific-release/8f93a58b82b94b6c9ac48277cc15bd48d4c0a902/
>
> Rados, fs and rgw were rerun and results are summarized here:
> https://tracker.ceph.c
On Thu, Apr 27, 2023 at 5:21 PM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/59542#note-1
> Release Notes - TBD
>
> Seeking approvals for:
>
> smoke - Radek, Laura
> rados - Radek, Laura
> rook - Sébastien Han
> cephadm - Adam K
>
On Thu, Apr 27, 2023 at 11:36 AM Tarrago, Eli (RIS-BCT)
wrote:
>
> After working on this issue for a bit.
> The active plan is to fail over the master to the “west” DC. Perform a realm
> pull from the west so that it forces the failover to occur, then have the
> “east” DC pull the realm data
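a sketch of what that realm pull typically looks like, with hypothetical realm name, endpoint and system-user credentials (run on the zone that should take over):

  radosgw-admin realm pull --rgw-realm=myrealm --url=http://west-rgw.example.com:8080 \
      --access-key=<system-access-key> --secret=<system-secret>
  radosgw-admin period update --commit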
of those
>> > packages in EPEL. There's this BZ (https://bugzilla.redhat.com/2166620)
>> > requesting that specific package, but that's only one out of the dozen of
>> > missing packages (plus transitive dependencies)...
>> >
>> > Kind Rega
# ceph windows tests
PR check will be made required once regressions are fixed
windows build currently depends on gcc11 which limits use of c++20
features. investigating newer gcc or clang toolchain
# 16.2.13 release
final testing in progress
# prometheus metric regressions
https://tracker.ceph.c
On Sun, Apr 16, 2023 at 11:47 PM Richard Bade wrote:
>
> Hi Everyone,
> I've been having trouble finding an answer to this question. Basically
> I'm wanting to know if stuff in the .log pool is actively used for
> anything or if it's just logs that can be deleted.
> In particular I was wondering a
On Wed, Apr 19, 2023 at 7:55 PM Christopher Durham wrote:
>
> Hi,
>
> I am using 17.2.6 on rocky linux for both the master and the slave site
> I noticed that:
> radosgw-admin sync status
> often shows that the metadata sync is behind a minute or two on the slave.
> This didn't make sense, as the
On Wed, Apr 19, 2023 at 5:13 AM Gaël THEROND wrote:
>
> Hi everyone, quick question regarding radosgw zone data-pool.
>
> I’m currently planning to migrate an old data-pool that was created with
> inappropriate failure-domain to a newly created pool with appropriate
> failure-domain.
>
> If I’m do
On Tue, Apr 11, 2023 at 3:53 PM Casey Bodley wrote:
>
> On Tue, Apr 11, 2023 at 3:19 PM Christopher Durham wrote:
> >
> >
> > Hi,
> > I see that this PR: https://github.com/ceph/ceph/pull/48030
> > made it into ceph 17.2.6, as per the change log at:
> >
On Tue, Apr 11, 2023 at 3:19 PM Christopher Durham wrote:
>
>
> Hi,
> I see that this PR: https://github.com/ceph/ceph/pull/48030
> made it into ceph 17.2.6, as per the change log at:
> https://docs.ceph.com/en/latest/releases/quincy/ That's great.
> But my scenario is as follows:
> I have two
there's a rgw_period_root_pool option for the period objects too. but
it shouldn't be necessary to override any of these
On Sun, Apr 9, 2023 at 11:26 PM wrote:
>
> Up :)
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an emai
t question, I don't know who's the maintainer of those
> > packages in EPEL. There's this BZ (https://bugzilla.redhat.com/2166620)
> > requesting that specific package, but that's only one out of the dozen of
> > missing packages (plus transitive dependenc
On Fri, Mar 24, 2023 at 3:46 PM Yuri Weinstein wrote:
>
> Details of this release are updated here:
>
> https://tracker.ceph.com/issues/59070#note-1
> Release Notes - TBD
>
> The slowness we experienced seemed to be self-cured.
> Neha, Radek, and Laura please provide any findings if you have them.