On Thu, Oct 3, 2019 at 6:46 PM Marc Roos wrote:
>
> >
> >>
> >> I was following the thread where you advised on this pg repair.
> >>
> >> I ran these rados 'list-inconsistent-obj'/'rados
> >> list-inconsistent-snapset' commands and have output on the snapset.
> >> I tried to extrapolate your
Thanks Matt! Really useful configs. I am still on Luminous, so I can
forget about this for now :( I will try when I am on Nautilus; I have already
updated my configuration. However, it is interesting that the tenant is not
specified anywhere in the configuration, so I guess it is being
extracted from
"Path" is either "/" to indicate the top of the tree, or a bucket name
to indicate a limited export for a single bucket. It's not related to
the user at all.
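For illustration, the two Path forms would look roughly like this in an
export block (the bucket name here is a made-up placeholder, not from this
thread):

    Path = "/";            # export the whole tree for the export's RGW user
    Path = "somebucket";   # or: limit the export to the single bucket "somebucket"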
On Thu, Oct 3, 2019 at 10:34 AM Marc Roos wrote:
>
>
> What should a multi-tenant RGW config look like? I am not able to get this
> working:
Hi Marc,
Here's an example that should work--userx and usery are RGW users
created in different tenants, like so:
radosgw-admin --tenant tnt1 --uid userx --display-name "tnt1-userx" \
--access_key "userxacc" --secret "test123" user create
radosgw-admin --tenant tnt2 --uid usery
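A matching nfs-ganesha export for the first of those users would then look
roughly like this (a sketch only; Export_ID and Pseudo are placeholders I
made up, and the tenant travels inside User_Id as "tenant$user"):

EXPORT {
    Export_ID = 100;
    Path = "/";
    Pseudo = "/tnt1";
    Protocols = 4;
    Access_Type = RW;
    FSAL {
        Name = RGW;
        # tenanted user created by the radosgw-admin command above
        User_Id = "tnt1$userx";
        Access_Key_Id = "userxacc";
        Secret_Access_Key = "test123";
    }
}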
We have tried running nfs-ganesha (2.7 - 2.8.1) with FSAL_CEPH backed by
a Nautilus CephFS. Performance when doing metadata operations (i.e.,
anything involving small files) is very slow.
On Thu, Oct 3, 2019 at 10:34 AM Marc Roos wrote:
>
>
> What should a multi-tenant RGW config look like? I am not able
What should a multi-tenant RGW config look like? I am not able to get this
working:
EXPORT {
    Export_ID = 301;
    Path = "test:test3";
    # Path = "/";
    Pseudo = "/rgwtester";
    Protocols = 4;
    FSAL {
        Name = RGW;
        User_Id = "test$tester1";
And, just as unexpectedly, things have returned to normal overnight:
https://icecube.wisc.edu/~vbrik/graph-1.png
The change seems to have coincided with the beginning of Rados Gateway
activity (before, it was essentially zero). I can see nothing in the
logs that would explain what happened.
RGW NFS can support any NFS style of authentication, but users will
have the RGW access of their nfs-ganesha export. You can create
exports with disjoint privileges, and since recent Luminous and Nautilus
releases, exports for RGW tenants as well.
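As a sketch of what disjoint privileges can look like (export IDs, paths and
pseudo paths below are invented for illustration; the access key and secret
lines are omitted for brevity):

EXPORT {
    Export_ID = 201;
    Path = "bucket-a";
    Pseudo = "/tnt1-readonly";
    Protocols = 4;
    Access_Type = RO;    # clients of this export can only read
    FSAL { Name = RGW; User_Id = "tnt1$userx"; }
}

EXPORT {
    Export_ID = 202;
    Path = "bucket-b";
    Pseudo = "/tnt2-readwrite";
    Protocols = 4;
    Access_Type = RW;    # clients of this export can also write
    FSAL { Name = RGW; User_Id = "tnt2$usery"; }
}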
Matt
On Tue, Oct 1, 2019 at 8:31 AM Marc Roos wrote:
>
> I think you can run into problems
So, Ganesha is an NFS gateway, living in userspace. It provides
access via NFS (for any NFS client) to a number of clustered storage
systems, or to local filesystems on its host. It can run on any
system that has access to the cluster (Ceph in this case). One
Ganesha instance can serve quite a few clients.
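For a CephFS-backed export (as opposed to the RGW exports discussed above), a
minimal sketch of the relevant block might look like this -- the pseudo path
and cephx user are placeholders, and real deployments will usually set more
options:

EXPORT {
    Export_ID = 20;
    Path = "/";
    Pseudo = "/cephfs";
    Protocols = 4;
    Access_Type = RW;
    FSAL {
        Name = CEPH;             # use libcephfs via FSAL_CEPH
        User_Id = "ganesha";     # cephx user the gateway authenticates as
    }
}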
Thank you. Do we have a quick document to do this migration?
Thanks
Swami
On Thu, Oct 3, 2019 at 4:38 PM Paul Emmerich wrote:
> On Thu, Oct 3, 2019 at 12:03 PM M Ranga Swami Reddy
> wrote:
> >
> > The URL below says: "Switching from a standalone deployment to a multi-site
> > replicated deployment
On Thu, Oct 3, 2019 at 12:03 PM M Ranga Swami Reddy
wrote:
>
> The URL below says: "Switching from a standalone deployment to a multi-site
> replicated deployment is not supported."
> https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/app-rgw-multisite.html
this is wrong,
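For reference, a rough sketch of the usual promotion from a single-site zone
to a multisite master, plus joining the second cluster as a secondary (realm,
zonegroup, zone names and endpoints below are placeholders, not from this
thread -- the Ceph RGW multisite documentation has the authoritative steps):

radosgw-admin realm create --rgw-realm=myrealm --default
radosgw-admin zonegroup rename --rgw-zonegroup default --zonegroup-new-name=us
radosgw-admin zone rename --rgw-zone default --zone-new-name=us-east --rgw-zonegroup=us
radosgw-admin zonegroup modify --rgw-realm=myrealm --rgw-zonegroup=us \
    --endpoints http://rgw1:80 --master --default
radosgw-admin user create --uid=sync-user --display-name="Sync User" --system
radosgw-admin zone modify --rgw-realm=myrealm --rgw-zonegroup=us --rgw-zone=us-east \
    --endpoints http://rgw1:80 --access-key=<sync-key> --secret=<sync-secret> \
    --master --default
radosgw-admin period update --commit

# on the second cluster, join as a secondary zone
radosgw-admin realm pull --url=http://rgw1:80 --access-key=<sync-key> --secret=<sync-secret>
radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-west \
    --endpoints=http://rgw2:80 --access-key=<sync-key> --secret=<sync-secret>
radosgw-admin period update --commit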
on both PGs:
[root@ceph-n10 ~]# zgrep "2.2a7" /var/log/ceph/ceph-osd.83.log*
/var/log/ceph/ceph-osd.83.log-20191002.gz:2019-10-01 07:19:47.060
7f9adab4b700 -1 log_channel(cluster) log [ERR] : 2.2a7 repair 11 errors,
0 fixed
/var/log/ceph/ceph-osd.83.log-20191003.gz:2019-10-02 09
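For context, the commands referenced earlier in the thread, applied to this
PG, would look roughly like this (a sketch; a fresh deep-scrub may be needed
before the inconsistency lists are populated again):

rados list-inconsistent-obj 2.2a7 --format=json-pretty
rados list-inconsistent-snapset 2.2a7 --format=json-pretty
ceph pg deep-scrub 2.2a7
ceph pg repair 2.2a7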
The URL below says: "Switching from a standalone deployment to a multi-site
replicated deployment is not supported."
https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/app-rgw-multisite.html
Please advise.
On Thu, Oct 3, 2019 at 3:28 PM M Ranga Swami Reddy
wrote:
> Hi,
>
Hi,
I am using 2 Ceph clusters in different DCs (about 500 km apart), both
running Ceph 12.2.11.
Now I want to set up RGW multisite using these 2 clusters.
Is it possible? If yes, please share a good document describing how to do it.
Thanks
Swami
Thank you Robin.
Looking at the video, it doesn't seem like a fix is anywhere near ready.
Am I correct in concluding that Ceph is not the right tool for my use-case?
Cheers,
Christian
On Oct 3 2019, at 6:07 am, Robin H. Johnson wrote:
> On Wed, Oct 02, 2019 at 01:48:40PM +0200, Christian
>
>>
>> I was following the thread where you advised on this pg repair.
>>
>> I ran these rados 'list-inconsistent-obj'/'rados
>> list-inconsistent-snapset' commands and have output on the snapset.
>> I tried to extrapolate your comment on the data/omap_digest_mismatch_info
>> onto my
>>
Hi,
Is there any way to query the list of dirty objects inside a tier/hot pool? I
only know how to see the number of them per pool.
Best regards,
Hi,
I have often observed that recovery/rebalance in Nautilus starts quite
fast but gets extremely slow (2-3 objects/s) even if there are around 20 OSDs
involved. Right now I am draining (reweighted to 0) 16x8TB disks; it has been
running for 4 days, and for the last 12 hours it has been more or less stuck at
cluster: