Re: [ceph-users] How to limit radosgw user privilege to read only mode?

2019-09-30 Thread Charles Alva
Update, I managed to limit the user's privileges by modifying the user's op-mask to read as follows: ``` radosgw-admin user modify --uid= --op-mask=read ``` And to roll back to its default privileges: ``` radosgw-admin user modify --uid= --op-mask="read,write,delete" ``` Kind regards, Ch
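A minimal sketch of the same op-mask toggle with the elided --uid filled in by a hypothetical user id (readonly-user is not from the original message):

```
# Restrict the user to read-only data operations (hypothetical uid "readonly-user")
radosgw-admin user modify --uid=readonly-user --op-mask=read

# Roll back to the default op-mask
radosgw-admin user modify --uid=readonly-user --op-mask="read,write,delete"
```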

[ceph-users] How to limit radosgw user privilege to read only mode?

2019-09-29 Thread Charles Alva
only? I could not find any clear explanation or example in the Ceph radosgw-admin docs. Is it done by changing the user's caps or op_mask? Or by setting the civetweb option to only allow HTTP HEAD and GET methods? Kind regards, Charles Alva Sent from Gmail Mobile
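For reference, a hedged sketch contrasting the two knobs mentioned in the question, using a hypothetical uid demo-user: caps gate the RGW admin API, while op_mask gates ordinary S3/Swift data operations, so op_mask is the one that makes a user read-only (as the follow-up above confirms):

```
# Show the user's current op_mask and caps (hypothetical uid "demo-user")
radosgw-admin user info --uid=demo-user

# caps grant/deny RGW admin API access, not ordinary object requests
radosgw-admin caps add --uid=demo-user --caps="buckets=read"

# op_mask is what limits ordinary data operations to read-only
radosgw-admin user modify --uid=demo-user --op-mask=read
```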

Re: [ceph-users] Slow requests from bluestore osds / crashing rbd-nbd

2019-05-21 Thread Charles Alva
Got it. Thanks for the explanation, Jason! Kind regards, Charles Alva Sent from Gmail Mobile On Tue, May 21, 2019 at 5:16 PM Jason Dillaman wrote: > On Tue, May 21, 2019 at 12:03 PM Charles Alva > wrote: > > > > Hi Jason, > > > > Should we disable fstrim servic

Re: [ceph-users] Slow requests from bluestore osds / crashing rbd-nbd

2019-05-21 Thread Charles Alva
Hi Jason, Should we disable the fstrim service inside VMs which run on top of RBD? I recall Ubuntu has a weekly fstrim cron job enabled by default, while we have to enable the fstrim service manually on Debian and CentOS. Kind regards, Charles Alva Sent from Gmail Mobile On Tue, May 21, 2019, 4:49
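As a side note, a minimal sketch of checking and toggling the periodic trim inside a guest, assuming a systemd-based distribution where the standard fstrim.timer unit is what drives the weekly run:

```
# See whether the weekly trim timer is active inside the guest
systemctl status fstrim.timer

# Disable it (and stop it immediately)
systemctl disable --now fstrim.timer

# Re-enable it later if desired
systemctl enable --now fstrim.timer

# One-off manual trim of all mounted filesystems
fstrim -av
```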

Re: [ceph-users] How to reduce HDD OSD flapping due to rocksdb compacting event?

2019-04-12 Thread Charles Alva
Got it. Thanks, Mark! Kind regards, Charles Alva Sent from Gmail Mobile On Fri, Apr 12, 2019 at 10:53 PM Mark Nelson wrote: > They have the same issue, but depending on the SSD may be better at > absorbing the extra IO if network or CPU are bigger bottlenecks. That's > one of th

Re: [ceph-users] How to reduce HDD OSD flapping due to rocksdb compacting event?

2019-04-12 Thread Charles Alva
Thanks Mark, This is interesting. I'll take a look at the links you provided. Does the rocksdb compaction issue only affect HDDs? Or do SSDs have the same issue? Kind regards, Charles Alva Sent from Gmail Mobile On Fri, Apr 12, 2019, 9:01 PM Mark Nelson wrote: > Hi Charles, >
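A hedged sketch of how to confirm where a given OSD's RocksDB actually lives, which is what decides whether compaction competes with client IO on a spinning disk (osd id 0 is only an example; the field names are the ones normally reported by osd metadata):

```
# Show device/rotational details for osd.0 (example id)
ceph osd metadata 0 | grep -E 'rotational|bluefs'
# "bluefs_db_rotational": "0" suggests the DB sits on flash,
# "1" that it shares the spinning data device.
```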

Re: [ceph-users] How to reduce HDD OSD flapping due to rocksdb compacting event?

2019-04-11 Thread Charles Alva
ause rebuilding the OSDs one by one will take forever. It's ceph-bluestore-tool. Is there any official documentation on how to migrate the WAL+DB to SSD online? I guess this feature is not backported to Luminous, right? Kind regards, Charles Alva Sent from Gmail Mobile On Fri, Apr 12, 2019 at 10:
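For reference, a hedged sketch of the DB/WAL migration with ceph-bluestore-tool, assuming a release where the bluefs-bdev-new-db and bluefs-bdev-migrate commands exist (they arrived after the early Luminous releases), a hypothetical OSD id 12 and a hypothetical SSD partition /dev/sdX1; the OSD must be stopped briefly, but no rebuild or backfill is needed:

```
# Stop the OSD first; this is an offline operation on osd.12 (hypothetical id)
systemctl stop ceph-osd@12

# Attach a new, empty DB device to the OSD (hypothetical /dev/sdX1)
ceph-bluestore-tool bluefs-bdev-new-db \
    --path /var/lib/ceph/osd/ceph-12 --dev-target /dev/sdX1

# Move the existing RocksDB data off the slow device onto the new DB device
ceph-bluestore-tool bluefs-bdev-migrate \
    --path /var/lib/ceph/osd/ceph-12 \
    --devs-source /var/lib/ceph/osd/ceph-12/block \
    --dev-target /var/lib/ceph/osd/ceph-12/block.db

systemctl start ceph-osd@12
```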

[ceph-users] How to reduce HDD OSD flapping due to rocksdb compacting event?

2019-04-10 Thread Charles Alva
the rocksdb reaches level 4 with 67GB data to compact. Kind regards, Charles Alva Sent from Gmail Mobile
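A hedged sketch of two checks that might help while investigating this: trigger compaction manually during a quiet window and compare how much DB space BlueFS reports (osd.3 is an example id; the admin-socket commands are the usual OSD ones):

```
# On the OSD host: ask osd.3 to compact its RocksDB now, during a quiet window
ceph daemon osd.3 compact

# Compare BlueFS DB usage before and after
ceph daemon osd.3 perf dump bluefs | grep -E 'db_used_bytes|db_total_bytes'
```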

[ceph-users] How to trim default.rgw.log pool?

2019-02-14 Thread Charles Alva
Hi All, Is there a way to trim the Ceph default.rgw.log pool so it won't take up huge space? Or perhaps is there a logrotate-like mechanism in place? Kind regards, Charles Alva Sent from Gmail Mobile
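A hedged sketch of how one might inspect the pool and trim the RGW usage log, assuming usage logging is what fills it (the date range below is only an example):

```
# See which objects occupy the pool and how much space it uses
rados -p default.rgw.log ls | head
ceph df | grep default.rgw.log

# Trim the RGW usage log up to a cutoff date (example range)
radosgw-admin usage trim --start-date=2019-01-01 --end-date=2019-02-01
```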

[ceph-users] Minimal downtime when changing Erasure Code plugin on Ceph RGW

2018-12-06 Thread Charles Alva
cache tier method as explained at https://ceph.com/geen-categorie/ceph-pool-migration/. Is the cache tier method feasible? Kind regards, Charles Alva Sent from Gmail Mobile
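For reference, a rough sketch of the cache-tier migration trick that the linked post describes, with hypothetical pool names oldpool (current EC profile) and newpool (new EC profile); this is an outline of the idea rather than a tested procedure, and cache tiering has its own caveats:

```
# Create newpool with the desired EC profile first, then put the old pool
# in front of it as a cache tier and drain it.
ceph osd tier add newpool oldpool --force-nonempty
ceph osd tier cache-mode oldpool forward --yes-i-really-mean-it
ceph osd tier set-overlay newpool oldpool

# Flush/evict every object from the "cache" (oldpool) down into newpool
rados -p oldpool cache-flush-evict-all

# Once empty, detach the old pool and delete or rename it
ceph osd tier remove-overlay newpool
ceph osd tier remove newpool oldpool
```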

Re: [ceph-users] Ceph Mimic on Debian 9 Stretch

2018-06-04 Thread Charles Alva
I see, thanks for the detailed information, Sage! Kind regards, Charles Alva Sent from Gmail Mobile On Tue, Jun 5, 2018 at 1:39 AM Sage Weil wrote: > [adding ceph-maintainers] > > On Mon, 4 Jun 2018, Charles Alva wrote: > > Hi Guys, > > > > When will the Cep

[ceph-users] Ceph Mimic on Debian 9 Stretch

2018-06-03 Thread Charles Alva
Hi Guys, When will the Ceph Mimic packages for Debian Stretch be released? I could not find the packages even after changing the sources.list. Kind regards, Charles Alva Sent from Gmail Mobile

Re: [ceph-users] Ceph 12.2.4 MGR spams syslog with "mon failed to return metadata for mds"

2018-04-26 Thread Charles Alva
Hi John, Just upgraded both Ceph clusters to 12.2.5 and it seemed the new version fixed the issue. Executing `for i in {1..3}; do ceph mds metadata mds$i; done` produced no error. I could only see the successful send beacon message in Ceph MGR logs. Thanks! Kind regards, Charles Alva Sent
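A minimal sketch of the verification loop quoted above, assuming MDS daemons named mds1 through mds3 as in the original cluster:

```
# Ask the monitors for each MDS's metadata; a failure here is what used to
# show up as "failed to return metadata for mds" in the mgr log.
for i in {1..3}; do
    ceph mds metadata "mds$i"
done
```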

Re: [ceph-users] Ceph 12.2.4 MGR spams syslog with "mon failed to return metadata for mds"

2018-04-26 Thread Charles Alva
Will do, thanks! Kind regards, Charles Alva Sent from Gmail Mobile On Thu, Apr 26, 2018 at 9:46 PM John Spray <jsp...@redhat.com> wrote: > On Thu, Apr 26, 2018 at 2:20 PM, Charles Alva <charlesa...@gmail.com> > wrote: > > I see. Do you need any log or debug output, Jo

Re: [ceph-users] Ceph 12.2.4 MGR spams syslog with "mon failed to return metadata for mds"

2018-04-26 Thread Charles Alva
I see. Do you need any log or debug output, John? Kind regards, Charles Alva Sent from Gmail Mobile On Thu, Apr 26, 2018 at 7:46 PM John Spray <jsp...@redhat.com> wrote: > On Wed, Apr 25, 2018 at 1:42 PM, Charles Alva <charlesa...@gmail.com> > wrote: > > Hi Joh

Re: [ceph-users] Ceph 12.2.4 MGR spams syslog with "mon failed to return metadata for mds"

2018-04-25 Thread Charles Alva
el(R) Xeon(R) CPU E31240 @ 3.30GHz", "distro": "ubuntu", "distro_description": "Ubuntu 16.04.4 LTS", "distro_version": "16.04", "hostname": "mds3", "kernel_description": "#1 SMP PVE 4.

Re: [ceph-users] Ceph 12.2.4 MGR spams syslog with "mon failed to return metadata for mds"

2018-04-20 Thread Charles Alva
ds1: (2) No such file or directory > 2018-04-20 06:21:26.051641 7fca14809700 0 ms_deliver_dispatch: unhandled > message 0x55bf89835600 mgrreport(mds.mds1 +24-0 packed 214) v5 from mds.0 > 10.100.100.114:6800/4132681434 > 2018-04-20 06:21:26.052169 7fca25102700 1 mgr finish mon fai

Re: [ceph-users] Ceph 12.2.4 MGR spams syslog with "mon failed to return metadata for mds"

2018-04-20 Thread Charles Alva
Hi Marc, I'm using CephFS and the mgr could not get the metadata of the MDS. I enabled the dashboard module, and every time I visit the Ceph filesystem page it returns an internal error 500. Kind regards, Charles Alva Sent from Gmail Mobile On Fri, Apr 20, 2018 at 9:24 AM, Marc Roos <m.r...
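A hedged sketch of two quick checks that correspond to what is described here (standard mgr/MDS commands; no cluster-specific names assumed):

```
# Confirm the dashboard module is actually enabled on the active mgr
ceph mgr module ls

# Dump metadata for all MDS daemons; if this errors for any of them,
# the dashboard's filesystem page is likely to return HTTP 500.
ceph mds metadata
```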

[ceph-users] Ceph 12.2.4 MGR spams syslog with "mon failed to return metadata for mds"

2018-04-20 Thread Charles Alva
educe disk IO and increase SSD life span? Kind regards, Charles Alva Sent from Gmail Mobile