Hi, I have a Ceph cluster v16.2.10
To use STS lite, my configuration is as follows:
ceph.conf
...
[client.rgw.ss-rgw-01]
host = ss-rgw-01
rgw_frontends = beast port=8080
rgw_zone = backup-hapu
admin_socket = /var/run/ceph/ceph-client.rgw.ss-rgw-01
rgw_sts_key = qekd3Rd5zXr0adQx
rgw_s3_auth_use
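With STS enabled, a client can request temporary credentials via GetSessionToken and then use them for S3 access. A minimal boto3 sketch against the frontend above; the user access/secret keys are placeholders:

import boto3

# Ask RGW's STS endpoint for temporary credentials (GetSessionToken).
# Endpoint matches the beast frontend above; the user keys are placeholders.
sts = boto3.client(
    'sts',
    endpoint_url='http://ss-rgw-01:8080',
    region_name='default',
    aws_access_key_id='<user-access-key>',
    aws_secret_access_key='<user-secret-key>',
)
creds = sts.get_session_token(DurationSeconds=3600)['Credentials']

# Use the temporary credentials against the same gateway.
s3 = boto3.client(
    's3',
    endpoint_url='http://ss-rgw-01:8080',
    region_name='default',
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
)
print([b['Name'] for b in s3.list_buckets()['Buckets']])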
each radosgw does maintain its own cache for certain metadata like
users and buckets. when one radosgw writes to a metadata object, it
broadcasts a notification (using rados watch/notify) to other radosgws
to update/invalidate their caches. the initiating radosgw waits for
all watch/notify response
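a minimal python-rados sketch of that watch/notify round trip, outside of rgw; the pool and object names are made up, and this only illustrates the librados mechanism, not rgw's actual cache objects:

import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')  # any pool you can write to

ioctx.write_full('demo-obj', b'')  # the object must exist to be watched

def on_notify(notify_id, notifier_id, watch_id, data):
    # a cache would invalidate/refresh its entry here
    print('got notify:', data)

watch = ioctx.watch('demo-obj', on_notify)

# notify() only returns once every watcher has acknowledged (or timed
# out), which is the synchronous round trip described above.
ioctx.notify('demo-obj', 'cache-invalidate')

watch.close()
ioctx.close()
cluster.shutdown()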
On Tue, Sep 12, 2023 at 07:13:13PM +0200, Matthias Ferdinand wrote:
> On Mon, Sep 11, 2023 at 02:37:59PM -0400, Matt Benjamin wrote:
> > Yes, it's also strongly consistent. It's also last writer wins, though, so
> > two clients somehow permitted to contend for updating policy could
> > overwrite e
that first "read 0~4194304" is probably what i fixed in
https://github.com/ceph/ceph/pull/53602, but it's hard to tell from the
osd log where these osd ops are coming from. why are there several
[read 1~10] requests after that? the rgw log would be more useful for
debugging, with --debug-rgw=20 and --d
Hello Casey,
Thanks a lot for that.
I forgot to mention in my previous message that I was able to trigger
the prefetch with the header bytes=1-10.
You can see the read 1~10 in the osd logs I’ve sent here -
https://pastebin.com/nGQw4ugd
Which is weird as it seems that it is not the same y
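For reproduction, the same ranged read can be sent from boto3; endpoint, credentials, bucket and key below are placeholders:

import boto3

s3 = boto3.client(
    's3',
    endpoint_url='http://rgw.example:8080',
    aws_access_key_id='<access-key>',
    aws_secret_access_key='<secret-key>',
)

# This should show up OSD-side as the small "read 1~10" seen in the logs.
resp = s3.get_object(Bucket='test-bucket', Key='test-object',
                     Range='bytes=1-10')
print(resp['ContentRange'], resp['Body'].read())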
hey Ondrej,
thanks for creating the tracker issue
https://tracker.ceph.com/issues/62938. i added a comment there, and
opened a fix in https://github.com/ceph/ceph/pull/53602 for the only
issue i was able to identify
On Wed, Sep 20, 2023 at 9:20 PM Ondřej Kukla wrote:
>
> I was checking the track
On Thu, Sep 21, 2023 at 03:49:25PM -0500, Laura Flores wrote:
> Hi Ceph users and developers,
>
> Big thanks to Cory Snyder and Jonas Sterr for sharing your insights with an
> audience of 50+ users and developers!
>
> Cory shared some valuable troubleshooting tools and tricks that would be
> help
Casey,
I did fix this. Here is what I did:
1. Stopped write access to the bucket
2. After I stopped the writes:
# radosgw-admin bucket sync status --bucket
showed just the one shard that was behind, matching the shard number that has
all the extra 0_ index objects.
3. then did:
# radosgw-ad
Hello Venky,
Nice to hear from you :) Hope you are doing well.
I tried as you suggested,
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/user_root# mkdir dir1 dir2
root@ss-joe-01(bash):/mnt/cephfs/volumes/_nogroup/4/user_root# echo "Hello
Worldls!" > file2
root@ss-joe-01(bash):/mnt/cephfs/vo
Hi Joseph,
On Fri, Sep 22, 2023 at 5:27 PM Joseph Fernandes wrote:
>
> Hello All,
>
> I found a weird issue with ceph_readdirplus_r() when used along
> with ceph_ll_lookup_vino().
> On ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy
> (stable)
>
> Any help is really apprecia
Hi,
For the record, in the past we faced a similar issue with OSDs being killed
one after the other every day, starting from midnight.
The root cause was linked to device_health_check launched by mgr on each
OSD.
While an OSD is doing a device_health_check, its admin socket is busy and can't
answer to
reattaching files
On Fri, Sep 22, 2023 at 5:25 PM Joseph Fernandes
wrote:
> Hello All,
>
> I found a weird issue with ceph_readdirplus_r() when used along
> with ceph_ll_lookup_vino().
> On ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy
> (stable)
>
> Any help is really ap
re-attaching the files
On Fri, Sep 22, 2023 at 5:25 PM Joseph Fernandes
wrote:
> Hello All,
>
> I found a weird issue with ceph_readdirplus_r() when used along
> with ceph_ll_lookup_vino().
> On ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy
> (stable)
>
> Any help is real
Hello All,
I found a weird issue with ceph_readdirplus_r() when used along
with ceph_ll_lookup_vino().
On ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy
(stable)
Any help is really appreciated.
Thanks in advance,
-Joe
Test Scenario :
A. Create a CephFS subvolume "4" and
Hi,
is it possible to use one cephx key for multiple RGWs running in parallel?
Maybe I could just use the same 'name' and the same key for all of the RGW
instances?
I plan to start RGWs all over the place in containers and let BGP handle the
traffic. But I don't know how to create on-demand keys, that
On Fri, Sep 22, 2023 at 8:40 AM Dominique Ramaekers
wrote:
>
> Hi,
>
> A question, to avoid using a too elaborate method of finding the most recent
> snapshot of an RBD image.
>
> So, what would be the preferred way to find the latest snapshot of this image?
>
> root@hvs001:/# rbd snap ls libvirt-poo
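One programmatic option: snapshot ids are assigned from a monotonically increasing counter, so the snapshot with the highest id is the most recently created one. A minimal python-rbd sketch, with pool and image names as placeholders:

import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('libvirt-pool')

# Snapshot ids increase monotonically, so the highest id is the
# most recently created snapshot.
with rbd.Image(ioctx, 'my-image', read_only=True) as image:
    snaps = list(image.list_snaps())
    if snaps:
        latest = max(snaps, key=lambda s: s['id'])
        print('latest snapshot:', latest['name'])

ioctx.close()
cluster.shutdown()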