Update: I managed to limit the user's privileges by modifying the user's
op-mask to read, as follows:
```
radosgw-admin user modify --uid= --op-mask=read
```
And to roll back to its default privileges:
```
radosgw-admin user modify --uid= --op-mask="read,write,delete"
```
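For reference, a quick way to double-check the change is to dump the user
metadata; the uid below is just a made-up example:
```
# "readonly-user" is a hypothetical uid; substitute your own.
radosgw-admin user info --uid=readonly-user | grep op_mask
```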
Kind regards,
Ch
... to read only?
I could not find any clear explanation or example in the Ceph
radosgw-admin docs. Is it done by changing the user's caps or op_mask? Or
by setting a civetweb option to allow only the HTTP HEAD and GET methods?
Kind regards,
Charles Alva
Sent from Gmail Mobile
Got it. Thanks for the explanation, Jason!
Kind regards,
Charles Alva
Sent from Gmail Mobile
On Tue, May 21, 2019 at 5:16 PM Jason Dillaman wrote:
> On Tue, May 21, 2019 at 12:03 PM Charles Alva
> wrote:
> >
> > Hi Jason,
> >
> > Should we disable fstrim servic
Hi Jason,
Should we disable the fstrim services inside VMs which run on top of RBD?
I recall Ubuntu has a weekly fstrim cron job enabled by default, while we
have to enable the fstrim service manually on Debian and CentOS.
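If it turns out disabling it is the right call, a minimal sketch for a
systemd-based guest looks like this (on older Ubuntu releases the weekly job
may live in /etc/cron.weekly/fstrim instead):
```
# Check whether the periodic trim timer is active, then stop and disable it.
systemctl status fstrim.timer
systemctl disable --now fstrim.timer
```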
Kind regards,
Charles Alva
Sent from Gmail Mobile
On Tue, May 21, 2019, 4:49
Got it. Thanks, Mark!
Kind regards,
Charles Alva
Sent from Gmail Mobile
On Fri, Apr 12, 2019 at 10:53 PM Mark Nelson wrote:
> They have the same issue, but depending on the SSD may be better at
> absorbing the extra IO if network or CPU are bigger bottlenecks. That's
> one of th
Thanks Mark,
This is interesting. I'll take a look at the links you provided.
Does the RocksDB compaction issue only affect HDDs, or do SSDs have the same
issue?
Kind regards,
Charles Alva
Sent from Gmail Mobile
On Fri, Apr 12, 2019, 9:01 PM Mark Nelson wrote:
> Hi Charles,
>
>
... because
rebuilding the OSDs one by one will take forever.
It's ceph-bluestore-tool.
Is there any official documentation on how to migrate the
WAL+DB to SSD online? I guess this feature is not backported to Luminous,
right?
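For anyone searching later, here is a rough sketch of the offline, per-OSD
flow on releases that ship the tooling (Nautilus and later, as far as I can
tell). The OSD id and target device below are hypothetical placeholders, and
each step should be checked against the ceph-bluestore-tool docs:
```
# Sketch only: osd.0 and /dev/ceph-db/osd0-db are made-up placeholders.
systemctl stop ceph-osd@0                          # the OSD must be stopped first
ceph-bluestore-tool bluefs-bdev-new-db \
    --path /var/lib/ceph/osd/ceph-0 \
    --dev-target /dev/ceph-db/osd0-db              # attach a new SSD-backed DB device
ceph-bluestore-tool bluefs-bdev-migrate \
    --path /var/lib/ceph/osd/ceph-0 \
    --devs-source /var/lib/ceph/osd/ceph-0/block \
    --dev-target /var/lib/ceph/osd/ceph-0/block.db # move existing DB/WAL data off the slow device
systemctl start ceph-osd@0
```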
Kind regards,
Charles Alva
Sent from Gmail Mobile
On Fri, Apr 12, 2019 at 10:
... the RocksDB reaches level 4 with 67 GB of data to compact.
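In case it helps, compaction can also be triggered manually per OSD; a
minimal sketch, assuming OSD 0 and commands run on the host carrying it:
```
# Online, if your release exposes the admin-socket command:
ceph daemon osd.0 compact
# Or offline, with the OSD stopped:
ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-0 compact
```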
Kind regards,
Charles Alva
Sent from Gmail Mobile
Hi All,
Is there a way to trim the Ceph default.rgw.log pool so it won't take up huge
space? Or perhaps is there a logrotate-like mechanism in place?
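In case it is useful, the log objects in that pool can at least be inspected
and pruned by hand; a rough sketch (the object name is a placeholder):
```
# List the log objects RGW has written, then remove ones no longer needed.
radosgw-admin log list
radosgw-admin log rm --object=<log-object-name>
```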
Kind regards,
Charles Alva
Sent from Gmail Mobile
... the cache tier
method as explained at https://ceph.com/geen-categorie/ceph-pool-migration/.
Is the cache tier method feasible?
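From memory, the cache-tier trick in that article goes roughly along these
lines; the pool names are placeholders and every step should be verified
against the article before running it on real data:
```
# Rough outline only; "oldpool" and "newpool" are illustrative names.
ceph osd pool create newpool 64
ceph osd tier add newpool oldpool --force-nonempty    # old pool becomes a tier of the new one
ceph osd tier cache-mode oldpool forward --yes-i-really-mean-it
rados -p oldpool cache-flush-evict-all                # push existing objects down to newpool
ceph osd tier remove newpool oldpool                  # detach once the old pool is drained
```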
Kind regards,
Charles Alva
Sent from Gmail Mobile
I see, thanks for the detailed information, Sage!
Kind regards,
Charles Alva
Sent from Gmail Mobile
On Tue, Jun 5, 2018 at 1:39 AM Sage Weil wrote:
> [adding ceph-maintainers]
>
> On Mon, 4 Jun 2018, Charles Alva wrote:
> > Hi Guys,
> >
> > When will the Cep
Hi Guys,
When will the Ceph Mimic packages for Debian Stretch be released? I could not
find the packages even after changing the sources.list.
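For reference, the usual Ceph repository line format (assuming packages were
actually published for Stretch) would be something like:
```
# Hypothetical entry for /etc/apt/sources.list.d/ceph.list; it only works
# once Mimic packages for Stretch actually exist on download.ceph.com.
deb https://download.ceph.com/debian-mimic/ stretch main
```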
Kind regards,
Charles Alva
Sent from Gmail Mobile
Hi John,
Just upgraded both Ceph clusters to 12.2.5, and it seems the new version
fixed the issue. Executing `for i in {1..3}; do ceph mds metadata mds$i;
done` produced no errors. I could only see the successful send-beacon
messages in the Ceph MGR logs. Thanks!
Kind regards,
Charles Alva
Sent
Will do, thanks!
Kind regards,
Charles Alva
Sent from Gmail Mobile
On Thu, Apr 26, 2018 at 9:46 PM John Spray <jsp...@redhat.com> wrote:
> On Thu, Apr 26, 2018 at 2:20 PM, Charles Alva <charlesa...@gmail.com>
> wrote:
> > I see. Do you need any log or debug output, Jo
I see. Do you need any log or debug output, John?
Kind regards,
Charles Alva
Sent from Gmail Mobile
On Thu, Apr 26, 2018 at 7:46 PM John Spray <jsp...@redhat.com> wrote:
> On Wed, Apr 25, 2018 at 1:42 PM, Charles Alva <charlesa...@gmail.com>
> wrote:
> > Hi Joh
"cpu": "Intel(R) Xeon(R) CPU E31240 @ 3.30GHz",
"distro": "ubuntu",
"distro_description": "Ubuntu 16.04.4 LTS",
"distro_version": "16.04",
"hostname": "mds3",
"kernel_description": "#1 SMP PVE 4.
ds1: (2) No such file or directory
> 2018-04-20 06:21:26.051641 7fca14809700 0 ms_deliver_dispatch: unhandled
> message 0x55bf89835600 mgrreport(mds.mds1 +24-0 packed 214) v5 from mds.0
> 10.100.100.114:6800/4132681434
> 2018-04-20 06:21:26.052169 7fca25102700 1 mgr finish mon fai
Hi Marc,
I'm using CephFS and the mgr could not get the metadata of the MDS. I enabled
the dashboard module, and every time I visit the CephFS page it returns an
internal error 500.
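A minimal sketch of how to gather more detail, assuming access to the host
running the active mgr (the daemon name below is a placeholder):
```
# Query MDS metadata directly and raise mgr verbosity while reproducing the 500.
ceph mds metadata                               # dump metadata for all MDS daemons
ceph daemon mgr.mgr1 config set debug_mgr 20    # "mgr1" is a hypothetical mgr daemon name
tail -f /var/log/ceph/ceph-mgr.mgr1.log
```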
Kind regards,
Charles Alva
Sent from Gmail Mobile
On Fri, Apr 20, 2018 at 9:24 AM, Marc Roos <m.r...
... reduce disk IO and
increase SSD life span?
Kind regards,
Charles Alva
Sent from Gmail Mobile