Re: [ceph-users] list admin issues

2018-10-12 Thread shubjero
Happens to me too, on gmail. I'm on half a dozen other mailman lists with no issues at all. I've escalated this problem to the ceph mailing list maintainer and they said it's an issue with their provider, but this was probably a year ago. On Tue, Oct 9, 2018 at 7:04 AM Elias Abacioglu <

[ceph-users] Self serve / automated S3 key creation?

2019-01-31 Thread shubjero
Has anyone automated the ability to generate S3 keys for OpenStack users in Ceph? Right now we take in a user's request manually ("Hey, we need an S3 API key for our OpenStack project 'X', can you help?"). We as cloud/ceph admins just use radosgw-admin to create them an access/secret key pair for
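
A minimal sketch of what that automation could look like, assuming it runs on an admin node with radosgw-admin and a keyring that has the needed caps; the uid is hypothetical and error handling is omitted:

#!/usr/bin/env python3
# Sketch: generate an S3 access/secret pair for an existing radosgw user
# by shelling out to radosgw-admin. Assumes admin credentials are present.
import json
import subprocess

def create_s3_key(uid: str) -> dict:
    # radosgw-admin prints the full user record as JSON; the generated
    # access/secret pair appears in its "keys" list.
    out = subprocess.run(
        ["radosgw-admin", "key", "create",
         "--uid", uid, "--key-type", "s3",
         "--gen-access-key", "--gen-secret"],
        check=True, capture_output=True, text=True,
    ).stdout
    user = json.loads(out)
    return user["keys"][-1]  # simplification: assume the new pair is listed last

if __name__ == "__main__":
    key = create_s3_key("project-x")  # hypothetical uid
    print(key["access_key"], key["secret_key"])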

Re: [ceph-users] large omap object in usage_log_pool

2019-05-27 Thread shubjero
wrote: > > > On 5/24/19 1:15 PM, shubjero wrote: > > Thanks for chiming in Konstantin! > > > > Wouldn't setting this value to 0 disable the sharding? > > > > Reference: http://docs.ceph.com/docs/mimic/radosgw/config-ref/ > > > > rgw override bucket ind

[ceph-users] large omap object in usage_log_pool

2019-05-23 Thread shubjero
Hi there, We have an old cluster that was built on Giant, which we have maintained and upgraded over time, and we are now running Mimic 13.2.5. The other day we received a HEALTH_WARN about 1 large omap object in the pool '.usage', which is our usage_log_pool defined in our radosgw zone. I am trying to
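
In case it helps anyone hitting the same warning, a rough sketch of how the usage log behind '.usage' can be inspected and trimmed (assumes radosgw-admin on an admin node; the cutoff date is illustrative, so check your retention needs before trimming anything):

#!/usr/bin/env python3
# Sketch: summarise the radosgw usage log and trim old entries,
# which is what usually shrinks the omap objects in the '.usage' pool.
import json
import subprocess

def rgw(*args: str) -> str:
    return subprocess.run(["radosgw-admin", *args],
                          check=True, capture_output=True, text=True).stdout

# Show a per-user summary of the stored usage data.
summary = json.loads(rgw("usage", "show", "--show-log-sum=true"))
print(summary.get("summary", []))

# Trim entries up to an (illustrative) cutoff date.
rgw("usage", "trim", "--end-date=2019-01-01")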

Re: [ceph-users] large omap object in usage_log_pool

2019-05-24 Thread shubjero
Thanks for chiming in Konstantin! Wouldn't setting this value to 0 disable the sharding? Reference: http://docs.ceph.com/docs/mimic/radosgw/config-ref/ rgw override bucket index max shards Description: Represents the number of shards for the bucket index object, a value of zero indicates there
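
One way to see which value a running gateway actually ended up with is to ask it over its admin socket; a small sketch, with a hypothetical daemon name, to be run on the host where that rgw lives:

#!/usr/bin/env python3
# Sketch: read the effective option value from a running radosgw
# via its admin socket.
import json
import subprocess

daemon = "client.rgw.gateway1"  # hypothetical daemon name
out = subprocess.run(
    ["ceph", "daemon", daemon, "config", "get",
     "rgw_override_bucket_index_max_shards"],
    check=True, capture_output=True, text=True).stdout
print(json.loads(out))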

[ceph-users] radosgw user audit trail

2019-07-08 Thread shubjero
Good day, We have a sizeable Ceph deployment and use object storage heavily. We also integrate our object storage with OpenStack, but sometimes we are required to create S3 keys for some of our users (aws-cli, Java apps that speak S3, etc.). I was wondering if it is possible to see an audit trail
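
Not a full audit trail, but the radosgw usage log does record per-user, per-bucket operation counts, which can be a starting point; a rough sketch of pulling it for one user (the uid is hypothetical):

#!/usr/bin/env python3
# Sketch: pull per-bucket operation counts for one radosgw user
# from the usage log. Not a substitute for a real audit trail.
import json
import subprocess

uid = "project-x"  # hypothetical user id
out = subprocess.run(
    ["radosgw-admin", "usage", "show", "--uid", uid,
     "--show-log-entries=true"],
    check=True, capture_output=True, text=True).stdout
for entry in json.loads(out).get("entries", []):
    for bucket in entry.get("buckets", []):
        print(bucket["bucket"], bucket.get("categories", []))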

Re: [ceph-users] Global power failure, OpenStack Nova/libvirt/KVM, and Ceph RBD locks

2019-11-19 Thread shubjero
Florian, Thanks for posting about this issue. This is something that we have been experiencing (stale exclusive locks) with our OpenStack and Ceph cloud more frequently, as our datacentre has recently had some reliability issues with power and cooling, causing several unexpected shutdowns. At this
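
For reference, a rough sketch of the manual clean-up after such a shutdown: list the locks on an affected RBD image and remove the stale one. The pool/image, lock id, and locker below are placeholders copied from the listing output, and a lock should only be removed once you are sure the original client is really gone:

#!/usr/bin/env python3
# Sketch: inspect and remove a stale exclusive lock on an RBD image.
# All identifiers here are placeholders; copy real values from 'rbd lock ls'.
import subprocess

image = "volumes/volume-0000"  # placeholder pool/image

# Show who currently holds locks on the image.
print(subprocess.run(["rbd", "lock", "ls", image],
                     check=True, capture_output=True, text=True).stdout)

# Remove a stale lock: arguments are image, lock id, and locker,
# taken from the listing above (placeholders here).
subprocess.run(["rbd", "lock", "rm", image,
                "auto 140352389245536", "client.348116"], check=True)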