[ceph-users] Re: [EXTERNAL] Re: RGW Bucket Notifications and MultiPart Uploads

2022-07-20 Thread Mark Selby
I have not tested with Quincy/17.x yet, so I do not know which notifications are sent for multipart uploads in this release set. I know that for Pacific/16.x I needed to add some code/logic to only act on notifications that represented the end state of an object creation. My tests show that
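For reference, a minimal sketch of the kind of consumer-side filtering described above (not Mark's actual code): it assumes the endpoint receives standard S3-style event records as JSON, and the set of eventName values that mark a fully written object is exactly the thing to verify per RGW release.

    # Minimal sketch of consumer-side filtering of RGW bucket notifications.
    # Assumes standard S3-style event records; which eventName values are
    # emitted for multipart uploads should be verified per RGW release.
    import json

    COMPLETE_EVENTS = {
        "ObjectCreated:Put",
        "ObjectCreated:Post",
        "ObjectCreated:Copy",
        "ObjectCreated:CompleteMultipartUpload",
    }

    def handle_notification(body: str) -> None:
        for record in json.loads(body).get("Records", []):
            name = record.get("eventName", "")
            # Some records carry an "s3:" prefix on the event name; strip it.
            if name.removeprefix("s3:") in COMPLETE_EVENTS:
                bucket = record["s3"]["bucket"]["name"]
                key = record["s3"]["object"]["key"]
                print(f"object fully created: {bucket}/{key}")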

[ceph-users] Re: replacing OSD nodes

2022-07-20 Thread Janne Johansson
On Wed, 20 Jul 2022 at 11:22, Jesper Lykkegaard Karlsen wrote: > Thanks for your answer, Janne. > Yes, I am also running "ceph osd reweight" on the "nearfull" OSDs, once they > get too close for comfort. > > But I just thought a continuous prioritization of rebalancing PGs could make > this
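As a rough illustration of the manual step being discussed (not the posters' tooling), a sketch that reads "ceph osd df --format json" and lowers the override reweight of OSDs above a threshold; the 85% threshold, the 0.05 step, the 0.80 floor and the JSON field names are assumptions to verify locally.

    # Sketch: lower the override reweight of OSDs getting close to nearfull,
    # i.e. an automated version of running "ceph osd reweight" by hand.
    # Threshold, step and floor are example values, not recommendations.
    import json
    import subprocess

    THRESHOLD = 85.0   # percent utilization considered "too close for comfort"
    STEP = 0.05        # how much to lower the override reweight each pass
    FLOOR = 0.80       # never go below this override reweight

    out = subprocess.run(["ceph", "osd", "df", "--format", "json"],
                         check=True, capture_output=True, text=True).stdout
    for osd in json.loads(out)["nodes"]:
        if osd["utilization"] > THRESHOLD:
            new_weight = max(osd["reweight"] - STEP, FLOOR)
            print(f"reweighting osd.{osd['id']} to {new_weight:.2f}")
            subprocess.run(["ceph", "osd", "reweight", str(osd["id"]),
                            f"{new_weight:.2f}"], check=True)

Note that "ceph osd reweight-by-utilization" (and its "test-reweight-by-utilization" dry-run variant) performs a similar adjustment as a single built-in command.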

[ceph-users] Using cloudbase windows RBD / wnbd with pre-pacific clusters

2022-07-20 Thread Wesley Dillingham
I understand that the client-side code available from Cloudbase started being distributed as Pacific and now Quincy client code, but is there any particular reason it shouldn't work in conjunction with, for instance, a Nautilus cluster? We have seen some errors when trying to do IO with mapped

[ceph-users] Re: [EXTERNAL] Re: RGW Bucket Notifications and MultiPart Uploads

2022-07-20 Thread Daniel Gryniewicz
Seems like the notification for a multipart upload should look different from the one for a normal upload? Daniel On 7/20/22 08:53, Yehuda Sadeh-Weinraub wrote: Can maybe leverage one of the other calls to check for upload completion: list multipart uploads and/or list parts. The latter should work if you

[ceph-users] Re: [EXTERNAL] Re: RGW Bucket Notifications and MultiPart Uploads

2022-07-20 Thread Yehuda Sadeh-Weinraub
Can maybe leverage one of the other calls to check for upload completion: list multipart uploads and/or list parts. The latter should work if you have the upload id at hand. Yehuda On Wed, Jul 20, 2022, 8:40 AM Casey Bodley wrote: > On Wed, Jul 20, 2022 at 12:57 AM Yuval Lifshitz > wrote: > >
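A small boto3 sketch of the two checks suggested above; the endpoint, bucket, key and upload id are placeholders, and credentials are assumed to come from the environment.

    # Sketch: check multipart-upload completion via "list multipart uploads"
    # and "list parts" (boto3 against RGW). Names and endpoint are placeholders.
    import boto3

    s3 = boto3.client("s3", endpoint_url="http://rgw.example.com:8000")

    # 1) Uploads still in progress for the bucket: if the key no longer shows
    #    up here, its multipart upload has been completed or aborted.
    uploads = s3.list_multipart_uploads(Bucket="mybucket").get("Uploads", [])
    print("in progress:", sorted({u["Key"] for u in uploads}))

    # 2) With the upload id at hand, list its parts: this only succeeds while
    #    the upload is still open and fails with NoSuchUpload once completed.
    try:
        parts = s3.list_parts(Bucket="mybucket", Key="mykey",
                              UploadId="example-upload-id")
        print("still uploading,", len(parts.get("Parts", [])), "parts so far")
    except s3.exceptions.NoSuchUpload:
        print("upload completed (or aborted)")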

[ceph-users] Re: [EXTERNAL] Re: RGW Bucket Notifications and MultiPart Uploads

2022-07-20 Thread Casey Bodley
On Wed, Jul 20, 2022 at 12:57 AM Yuval Lifshitz wrote: > > yes, that would work. you would get a "404" until the object is fully > uploaded. just note that you won't always get a 404 before the multipart upload completes, because multipart uploads can overwrite existing objects
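A boto3 sketch of the HEAD-based check being discussed, including the caveat above; endpoint, bucket and key are placeholders.

    # Sketch: HEAD the object and treat a 404 as "multipart upload not complete
    # yet". As noted above, this is only reliable for new keys: if the multipart
    # upload overwrites an existing object, HEAD already returns 200 for the
    # old version. Endpoint, bucket and key are placeholders.
    import boto3
    import botocore.exceptions

    s3 = boto3.client("s3", endpoint_url="http://rgw.example.com:8000")

    def object_visible(bucket: str, key: str) -> bool:
        try:
            s3.head_object(Bucket=bucket, Key=key)
            return True
        except botocore.exceptions.ClientError as e:
            if e.response["Error"]["Code"] == "404":
                return False
            raise

    print(object_visible("mybucket", "mykey"))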

[ceph-users] Re: Quincy: cephfs "df" used 6x higher than "du"

2022-07-20 Thread Jake Grimmett
Dear All, Just noticed that "ceph osd df" shows "Raw Use" of ~360 GiB per OSD, with 65 GiB of data stored, see below. Is the disparity between du and df due to low-level OSD data structures (?) consuming a large proportion of space (~300 GiB per OSD, 130 TB total), compared to the 25 TB of actual
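One quick way to see where the gap sits is to compare the logical "stored" bytes against the raw "bytes_used" per pool; a minimal sketch, assuming the JSON field names of "ceph df --format json" (verify on the local release):

    # Sketch: per-pool ratio of raw bytes used to logical bytes stored, to see
    # which pool accounts for the du/df disparity. Field names are taken from
    # "ceph df --format json" and may differ slightly between releases.
    import json
    import subprocess

    out = subprocess.run(["ceph", "df", "--format", "json"],
                         check=True, capture_output=True, text=True).stdout
    for pool in json.loads(out)["pools"]:
        stored = pool["stats"]["stored"]
        used = pool["stats"]["bytes_used"]
        ratio = used / stored if stored else 0.0
        print(f"{pool['name']}: stored={stored / 2**40:.2f} TiB  "
              f"raw_used={used / 2**40:.2f} TiB  ratio={ratio:.1f}x")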

[ceph-users] Quincy: cephfs "df" used 6x higher than "du"

2022-07-20 Thread Jake Grimmett
Dear All, We have just built a new cluster using Quincy 17.2.1. After copying ~25 TB to the cluster (from a Mimic cluster), we see 152 TB used, which is a ~6x disparity. Is this just a Ceph accounting error, or is space being wasted? [root@wilma-s1 ~]# du -sh /cephfs2/users 24T

[ceph-users] Re: rh8 krbd mapping causes no match of type 1 in addrvec problem decoding monmap, -2

2022-07-20 Thread Ilya Dryomov
On Tue, Jul 19, 2022 at 9:55 PM Wesley Dillingham wrote: > > > Thanks. > > Interestingly the older kernel did not have a problem with it but the newer > kernel does. The older kernel can't communicate via the v2 protocol, so it doesn't (need to) distinguish v1 and v2 addresses. Thanks,
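To see what the kernel client actually has to decode, one can dump the monitor map and list the address types each mon advertises; a minimal sketch, assuming the JSON layout of "ceph mon dump --format json" on recent releases:

    # Sketch: list the v1/v2 addresses each monitor advertises in its addrvec,
    # which is what the kernel client parses from the monmap. Field names are
    # assumed from "ceph mon dump --format json"; verify on the local release.
    import json
    import subprocess

    out = subprocess.run(["ceph", "mon", "dump", "--format", "json"],
                         check=True, capture_output=True, text=True).stdout
    for mon in json.loads(out)["mons"]:
        addrs = [f"{a['type']}:{a['addr']}" for a in mon["public_addrs"]["addrvec"]]
        print(mon["name"], " ".join(addrs))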

[ceph-users] Re: replacing OSD nodes

2022-07-20 Thread Jesper Lykkegaard Karlsen
Thanks for your answer, Janne. Yes, I am also running "ceph osd reweight" on the "nearfull" OSDs, once they get too close for comfort. But I just thought a continuous prioritization of rebalancing PGs could make this process smoother, with less/no need for hands-on intervention. Best, Jesper

[ceph-users] Re: replacing OSD nodes

2022-07-20 Thread Janne Johansson
On Tue, 19 Jul 2022 at 13:09, Jesper Lykkegaard Karlsen wrote: > > Hi all, > Setup: Octopus - erasure 8-3 > I had gotten to the point where I had some rather old OSD nodes that I > wanted to replace with new ones. > The procedure was planned like this: > > * add new replacement OSD nodes >
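The quoted plan is cut off above, so the following is only a generic sketch of one common drain pattern (not necessarily Jesper's procedure): once the replacement nodes are in and healthy, set the CRUSH weight of an old node's OSDs to 0 so PGs migrate off before the OSDs are removed. The host name is a placeholder.

    # Generic sketch of draining an old OSD node (not the poster's exact plan):
    # "ceph osd ls-tree <host>" lists the OSD ids under that CRUSH node, and
    # setting their CRUSH weight to 0 moves the PGs off before removal.
    import subprocess

    def drain_host(host: str) -> None:
        ids = subprocess.run(["ceph", "osd", "ls-tree", host], check=True,
                             capture_output=True, text=True).stdout.split()
        for osd_id in ids:
            subprocess.run(["ceph", "osd", "crush", "reweight",
                            f"osd.{osd_id}", "0"], check=True)

    drain_host("old-osd-node-01")   # placeholder host name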