[ceph-users] Re: CephFS as Offline Storage

2024-05-21 Thread Matthias Ferdinand
On Tue, May 21, 2024 at 08:54:26PM +, Eugen Block wrote: > It’s usually no problem to shut down a cluster. Set at least the noout flag, > the other flags like norebalance, nobackfill etc won’t hurt either. Then > shut down the servers. I do that all the time with test clusters (they do > have
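For the record, the flags mentioned in that reply can be set in one go before powering down. A minimal dry-run sketch (it only assembles and prints the commands; running them for real assumes an admin keyring on the host):

```shell
# Flags typically set before a planned whole-cluster shutdown.
# Dry run: the commands are assembled and printed, not executed.
flags="noout norebalance nobackfill norecover"
cmds=""
for f in $flags; do
    cmds="${cmds}ceph osd set ${f}; "
done
echo "$cmds"
# After bringing the cluster back up, clear them again:
# for f in $flags; do ceph osd unset "$f"; done
```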

[ceph-users] Re: rgw: disallowing bucket creation for specific users?

2023-10-06 Thread Matthias Ferdinand
-1" on the CLI with "radosgw quota set". But interesting to see this done in a single step when creating the user. Matthias > > Regards, > > Ondrej > > > On 6. 10. 2023, at 8:44, Matthias Ferdinand wrote: > > > > On Thu, Oct 05, 2023 at 09:22:29AM +0200,
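The "single step" at user creation presumably refers to the `--max-buckets` option of `radosgw-admin`; a dry-run sketch (the uid and display name are made-up placeholders, and the commands are only echoed, not run against a cluster):

```shell
# Create an rgw user who is not allowed to create buckets (max_buckets=-1),
# or retrofit the limit onto an existing user. Dry run: echoed only.
uid="example-user"   # placeholder uid
cmd_create="radosgw-admin user create --uid=$uid --display-name=Example --max-buckets=-1"
cmd_modify="radosgw-admin user modify --uid=$uid --max-buckets=-1"
echo "$cmd_create"
echo "$cmd_modify"
```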

[ceph-users] Re: rgw: disallowing bucket creation for specific users?

2023-10-06 Thread Matthias Ferdinand
]}, > > "Action": [ "s3:DeleteBucket", "s3:DeleteBucketPolicy", > > "s3:PutBucketPolicy" ], > > "Resource": [ > > "arn:aws:s3:::*" > > ] > > }] > > }
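The quoting has mangled the policy JSON above; a plausible reconstruction of such a deny statement, written to a file. Only the Action/Resource parts come from the quoted fragment; the principal ARN and the `s3cmd` invocation are illustrative assumptions:

```shell
# Bucket policy denying bucket deletion and any further policy changes,
# as discussed in the thread. Principal is a placeholder user ARN.
cat > /tmp/deny-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Deny",
    "Principal": {"AWS": ["arn:aws:iam:::user/example-user"]},
    "Action": ["s3:DeleteBucket", "s3:DeleteBucketPolicy", "s3:PutBucketPolicy"],
    "Resource": ["arn:aws:s3:::*"]
  }]
}
EOF
# apply with e.g.:  s3cmd setpolicy /tmp/deny-policy.json s3://somebucket
```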

[ceph-users] Re: rgw: disallowing bucket creation for specific users?

2023-10-04 Thread Matthias Ferdinand
On Tue, Oct 03, 2023 at 06:10:17PM +0200, Matthias Ferdinand wrote: > On Sun, Oct 01, 2023 at 12:00:58PM +0200, Peter Goron wrote: > > Hi Matthias, > > > > One possible way to achieve your need is to set a quota on number of > > buckets at user level (see > &

[ceph-users] Re: rgw: disallowing bucket creation for specific users?

2023-10-04 Thread Matthias Ferdinand
bucket owner, and also the bucket policy can't be modified/deleted anymore. This closes the loopholes I could come up with so far; there might still be some left I am currently not aware of :-) On Wed, Oct 04, 2023 at 06:20:09PM +0200, Matthias Ferdinand wrote: > On Tue, Oct 03, 2023 at 06:10:17P

[ceph-users] Re: rgw: disallowing bucket creation for specific users?

2023-10-04 Thread Matthias Ferdinand
On Tue, Oct 03, 2023 at 06:10:17PM +0200, Matthias Ferdinand wrote: > On Sun, Oct 01, 2023 at 12:00:58PM +0200, Peter Goron wrote: > > Hi Matthias, > > > > One possible way to achieve your need is to set a quota on number of > > buckets at user level (see > &

[ceph-users] Re: rgw: disallowing bucket creation for specific users?

2023-10-03 Thread Matthias Ferdinand
control. thanks a lot, rather an elegant solution. Matthias > > Rgds, > Peter > > > On Sun, Oct 1, 2023, 10:51, Matthias Ferdinand > wrote: > > > Hi, > > > > I am still evaluating ceph rgw for specific use cases. > > > > My question is

[ceph-users] rgw: disallowing bucket creation for specific users?

2023-10-01 Thread Matthias Ferdinand
Hi, I am still evaluating ceph rgw for specific use cases. My question is about keeping the realm of bucket names under control of rgw admins. Normal S3 users have the ability to create new buckets as they see fit. This opens opportunities for creating excessive amounts of buckets, or for

[ceph-users] Re: rgw: strong consistency for (bucket) policy settings?

2023-09-23 Thread Matthias Ferdinand
window of incoherent behaviour among rgw daemons (one rgw applying old policy to requests, some other rgw already applying new policy), or will it just be a very short window? thanks Matthias > > On Fri, Sep 22, 2023 at 5:53 PM Matthias Ferdinand > wrote: > > > > On Tue, Sep 12, 20

[ceph-users] Re: rgw: strong consistency for (bucket) policy settings?

2023-09-22 Thread Matthias Ferdinand
On Tue, Sep 12, 2023 at 07:13:13PM +0200, Matthias Ferdinand wrote: > On Mon, Sep 11, 2023 at 02:37:59PM -0400, Matt Benjamin wrote: > > Yes, it's also strongly consistent. It's also last writer wins, though, so > > two clients somehow permitted to contend for updating policy coul

[ceph-users] Re: Join us for the User + Dev Relaunch, happening this Thursday!

2023-09-22 Thread Matthias Ferdinand
On Thu, Sep 21, 2023 at 03:49:25PM -0500, Laura Flores wrote: > Hi Ceph users and developers, > > Big thanks to Cory Snyder and Jonas Sterr for sharing your insights with an > audience of 50+ users and developers! > > Cory shared some valuable troubleshooting tools and tricks that would be >

[ceph-users] Re: rgw: strong consistency for (bucket) policy settings?

2023-09-12 Thread Matthias Ferdinand
confirming this! Matthias > > On Mon, Sep 11, 2023 at 2:21 PM Matthias Ferdinand > wrote: > > > Hi, > > > > while I don't currently use rgw, I still am curious about consistency > > guarantees. > > > > Usually, S3 has strong read-after-write con

[ceph-users] rgw: strong consistency for (bucket) policy settings?

2023-09-11 Thread Matthias Ferdinand
Hi, while I don't currently use rgw, I still am curious about consistency guarantees. Usually, S3 has strong read-after-write consistency guarantees (for requests that do not overlap). According to https://docs.ceph.com/en/latest/dev/radosgw/bucket_index/ in Ceph this is also true for

[ceph-users] Re: Some hint for a DELL PowerEdge T440/PERC H750 Controller...

2023-04-17 Thread Matthias Ferdinand
Hi, > > Matthias suggest to enable write cache, you suggest to disble it... or i'm > > cache-confused?! ;-) there were some discussions about write cache settings last year, e.g. https://www.spinics.net/lists/ceph-users/msg73263.html https://www.spinics.net/lists/ceph-users/msg69489.html

[ceph-users] Re: Some hint for a DELL PowerEdge T440/PERC H750 Controller...

2023-04-06 Thread Matthias Ferdinand
spinners are slow anyway, but on top of that SAS disks often default to writecache=off. In use as a single disk with no risk of raid write-holes, you can turn on writecache. On SAS, I would assume the firmware does not lie about writes reaching stable storage (flushes). # turn on temporarily:
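The command cut off at "# turn on temporarily:" is presumably sdparm's write-cache (WCE) toggle; a dry-run sketch (/dev/sdX is a placeholder, and the commands are only assembled and printed):

```shell
dev="/dev/sdX"   # placeholder SAS device
# Temporary (reverts on power cycle), matching the truncated hint above:
cmd_tmp="sdparm --set WCE $dev"
# Persistent across power cycles:
cmd_save="sdparm --set WCE --save $dev"
# Inspect the current setting:
cmd_get="sdparm --get WCE $dev"
printf '%s\n%s\n%s\n' "$cmd_tmp" "$cmd_save" "$cmd_get"
```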

[ceph-users] Re: monitoring apply_latency / commit_latency ?

2023-04-02 Thread Matthias Ferdinand
On Thu, Mar 30, 2023 at 08:56:06PM +0400, Konstantin Shalygin wrote: > Hi, > > > On 25 Mar 2023, at 23:15, Matthias Ferdinand wrote: > > > > from "ceph daemon osd.X perf dump"? > > > No, from ceph-mgr prometheus exporter > You can enable

[ceph-users] Re: monitoring apply_latency / commit_latency ?

2023-03-25 Thread Matthias Ferdinand
client->OSD command timing? - are bluestore/filestore values about OSD->storage op timing? Please bear with me :-) I just try to get some rough understanding what the numbers to be collected and graphed actually mean and how they are related to each other. Regards Matthias > > O

[ceph-users] monitoring apply_latency / commit_latency ?

2023-03-24 Thread Matthias Ferdinand
Hi, I would like to understand how the per-OSD data from "ceph osd perf" (i.e. apply_latency, commit_latency) is generated. So far I couldn't find documentation on this. "ceph osd perf" output is nice for a quick glimpse, but is not very well suited for graphing. Output values are from the most

[ceph-users] Re: Ceph Bluestore tweaks for Bcache

2023-03-21 Thread Matthias Ferdinand
"# setting rotational=1 on ${r}" echo "1" >${r} fi fi fi fi #--- On Thu, Feb 02, 2023 at 12:18:55AM +0100, Matthias Ferdinand wrote: > ceph version: 17.2.0 on Ubuntu 22.04 >
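Restated as a self-contained sketch: the script tail above just writes 1 to the bcache device's rotational attribute so bluestore applies HDD tunings. A temp file stands in for the real sysfs path here so the sketch runs anywhere (on a real host, r would be /sys/block/bcacheN/queue/rotational):

```shell
# Stand-in for /sys/block/bcache0/queue/rotational (assumption for demo).
r=$(mktemp)
echo "0" > "$r"    # pretend the kernel reported the device as non-rotational
if [ "$(cat "$r")" != "1" ]; then
    echo "# setting rotational=1 on ${r}"
    echo "1" > "$r"
fi
```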

[ceph-users] Re: Ceph Bluestore tweaks for Bcache

2023-02-01 Thread Matthias Ferdinand
ceph version: 17.2.0 on Ubuntu 22.04 non-containerized ceph from Ubuntu repos cluster started on luminous I have been using bcache on filestore on rotating disks for many years without problems. Now converting OSDs to bluestore, there are some strange effects. If I

[ceph-users] ceph-users list archive missing almost all mail

2023-01-08 Thread Matthias Ferdinand
nt threads are found, everything else says "no email threads could be found for this month". Could somebody please look into this? Regards Matthias Ferdinand ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io

[ceph-users] Re: octopus rbd cluster just stopped out of nowhere (>20k slow ops)

2022-12-07 Thread Matthias Ferdinand
On Wed, Dec 07, 2022 at 11:13:49AM +0100, Boris Behrens wrote: > Hi Sven, > thanks for the input. > > So I did some testing and "maybe" optimization. > The same disk type in two different hosts (one Ubuntu and one Centos7) have > VERY different iostat %util values: I guess Centos7 has a rather

[ceph-users] docs.ceph.com inaccessible via Tor

2022-11-05 Thread Matthias Ferdinand
the cloudflare settings accordingly? Several years ago, they would group Tor nodes under a "Tor" pseudo-country. If that is still the case, it might suffice to tick that pseudo-country off from a "dangerous countries" list or something like that. Best regard

[ceph-users] Re: dashboard on Ubuntu 22.04: python3-cheroot incompatibility

2022-07-22 Thread Matthias Ferdinand
On Fri, Jul 22, 2022 at 04:54:23PM +0100, James Page wrote: > > If I remove the version check (see below), dashboard appears to be working. > > > https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1967139 > > I just uploaded a fix for cheroot to resolve this issue - the stable > release update

[ceph-users] dashboard on Ubuntu 22.04: python3-cheroot incompatibility

2022-07-22 Thread Matthias Ferdinand
Hi, trying to activate ceph dashboard on a 17.2.0 cluster (Ubuntu 22.04 using standard ubuntu repos), the dashboard module crashes because it cannot understand the python3-cheroot version number '8.5.2+ds1': root@mceph00:~# ceph crash info

[ceph-users] ethernet bond mac address collision after Ubuntu upgrade

2022-07-21 Thread Matthias Ferdinand
Hi, just a heads up for others using Ubuntu and both ethernet bonding and image cloning when provisioning ceph servers: mac address selection for bond interfaces was changed to only depend on /etc/machine-id. Having several machines sharing the same /etc/machine-id then wreaks havoc. I

[ceph-users] Re: mgr service restarted by package install?

2022-07-18 Thread Matthias Ferdinand
; > 2022-07-13T11:43:41.308+0200 7f71c0c86700 1 mgr handle_mgr_map > respawning because set of enabled modules changed! > > Cheers, Dan > > > On Sat, Jul 16, 2022 at 4:34 PM Matthias Ferdinand > wrote: > > > > Hi, > > > > while updating a test c

[ceph-users] mgr service restarted by package install?

2022-07-16 Thread Matthias Ferdinand
"ceph version 17.2.1 (ec95624474b1871a821a912b8c3af68f8f8e7aa1) quincy (stable)": 3 } } Not sure how problematic this is, but AFAIK it was claimed that ceph package installs would not restart ceph services by themselves. Regards Matthias Ferdinand

[ceph-users] Re: docs dangers large raid

2021-06-29 Thread Matthias Ferdinand
On Tue, Jun 29, 2021 at 08:37:36AM +, Marc wrote: > > Can someone point me to some good doc's describing the dangers of using a > large amount of disks in a raid5/raid6? (Understandable for less techy people) Hi, there are some slides at

[ceph-users] Re: Can not mount rbd device anymore

2021-06-23 Thread Matthias Ferdinand
On Tue, Jun 22, 2021 at 02:36:00PM +0200, Ml Ml wrote: > Hello List, > > oversudden i can not mount a specific rbd device anymore: > > root@proxmox-backup:~# rbd map backup-proxmox/cluster5 -k > /etc/ceph/ceph.client.admin.keyring > /dev/rbd0 > > root@proxmox-backup:~# mount /dev/rbd0

[ceph-users] Re: XFS on RBD on EC painfully slow

2021-05-28 Thread Matthias Ferdinand
On Thu, May 27, 2021 at 02:54:00PM -0500, Reed Dier wrote: > Hoping someone may be able to help point out where my bottleneck(s) may be. > > I have an 80TB kRBD image on an EC8:2 pool, with an XFS filesystem on top of > that. > This was not an ideal scenario, rather it was a rescue mission to

[ceph-users] Re: any experience on using Bcache on top of HDD OSD

2021-04-20 Thread Matthias Ferdinand
On Tue, Apr 20, 2021 at 08:27:50AM +0200, huxia...@horebdata.cn wrote: > Dear Mattias, > > Very glad to know that your setting with Bcache works well in production. > > How long have you been puting XFS on bcache on HDD in production? Which > bcache version (i mean the kernel) do you use? or

[ceph-users] Re: any experience on using Bcache on top of HDD OSD

2021-04-19 Thread Matthias Ferdinand
On Sun, Apr 18, 2021 at 10:31:30PM +0200, huxia...@horebdata.cn wrote: > Dear Cephers, > > Just curious about any one who has some experience on using Bcache on top of > HDD OSD to accelerate IOPS performance? > > If any, how about the stability and the performance improvement, and for how >

[ceph-users] Re: Best practices for OSD on bcache

2021-03-02 Thread Matthias Ferdinand
On Tue, Mar 02, 2021 at 05:47:29PM +0800, Norman.Kern wrote: > Matthias,  > > I agreed with you for tuning. I  ask this question just for that my OSDs have > problems when the > > cache_available_percent less than 30, the SSDs almost useless and all I/Os > bypass to HDDs with large latency.

[ceph-users] Re: Best practices for OSD on bcache

2021-03-01 Thread Matthias Ferdinand
On Mon, Mar 01, 2021 at 12:37:38PM +0800, Norman.Kern wrote: > Hi, guys > > I am testing ceph on bcache devices,  I found the performance is not > good as expected. Does anyone have any best practices for it?  Thanks. Hi, sorry to say, but since use cases and workloads differ so much, there is

[ceph-users] subscriptions from lists.ceph.com now on lists.ceph.io?

2019-09-02 Thread Matthias Ferdinand
com does not work anymore. Created a new account, logged in and to me the subscription settings look ok. Can you help me here? Maybe it is just the digests that do not work? Please answer to me directly, as I am currently not receiving any list messages. Thank you Matthias Ferdinand