Re: [ceph-users] About the NFS on RGW

2016-03-22 Thread Matt Benjamin
Hi Xusangdi,

NFS on RGW is not intended as an alternative to CephFS.  The basic idea is to 
expose the S3 namespace using Amazon's prefix+delimiter convention (delimiter 
currently limited to '/').  We use opens for atomicity, which implies NFSv4 (or 
4.1).  In addition to limitations by design, there are some limitations in 
Jewel.  For example, clients should use (or emulate) sync mount behavior.  
Also, I/O is proxied--that restriction should be lifted in future releases.  
I'll post here when we have some usage documentation ready.
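
For reference, a minimal client-side sketch (hostname and mount point are 
placeholders; it assumes an nfs-ganesha instance exporting the RGW namespace 
at "/"):

  # NFSv4.1 mount with synchronous write behavior
  mount -t nfs -o nfsvers=4.1,proto=tcp,sync ganesha.example.com:/ /mnt/rgw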

Matt

- Original Message -
> From: "Xusangdi" <xu.san...@h3c.com>
> To: mbenja...@redhat.com, ceph-us...@ceph.com
> Cc: ceph-de...@vger.kernel.org
> Sent: Tuesday, March 22, 2016 8:12:41 AM
> Subject: About the NFS on RGW
> 
> Hi Matt & Cephers,
> 
> I am looking for advice on setting up a file system based on Ceph. As CephFS
> is not yet production ready (or have I missed some breakthroughs?), the new NFS on
> RadosGW should be a promising alternative, especially for large files, which
> is what we are most interested in. However, after searching around the Ceph
> documentation (http://docs.ceph.com/docs/master/) and recent community
> mails, I cannot find much information about it. Could you please provide
> some introduction about the new NFS, and (if possible) a raw way to try it?
> Thank you!
> 
> Regards,
> ---Sandy
> 

-- 
Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-707-0660
fax.  734-769-8938
cel.  734-216-5309
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Any Docs to configure NFS to access RADOSGW buckets on Jewel

2016-04-27 Thread Matt Benjamin
Hi WD,

No, it's not the same.  The new mechanism uses an nfs-ganesha server to export 
the RGW namespace.  Some upstream documentation will be forthcoming...

Regards,

Matt

- Original Message -
> From: "WD Hwang" <wd_hw...@wistron.com>
> To: "a jazdzewski" <a.jazdzew...@googlemail.com>
> Cc: ceph-users@lists.ceph.com
> Sent: Wednesday, April 27, 2016 5:03:12 AM
> Subject: Re: [ceph-users] Any Docs to configure NFS to access RADOSGW buckets 
> on Jewel
> 
> Hi Ansgar,
>   Thanks for your information.
>   I have tried 's3fs-fuse' to mount RADOSGW buckets on an Ubuntu client node. It
>   works.
>   But I am not sure this is the technique that accesses RADOSGW buckets via NFS
>   on Jewel.
> 
> Best Regards,
> WD
> 
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Ansgar Jazdzewski
> Sent: Wednesday, April 27, 2016 4:32 PM
> To: WD Hwang/WHQ/Wistron
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Any Docs to configure NFS to access RADOSGW buckets
> on Jewel
> 
> All the information I have so far is from the FOSDEM talk:
> 
> https://fosdem.org/2016/schedule/event/virt_iaas_ceph_rados_gateway_overview/attachments/audio/1079/export/events/attachments/virt_iaas_ceph_rados_gateway_overview/audio/1079/Fosdem_RGW.pdf
> 
> Cheers,
> Ansgar
> 
> 2016-04-27 2:28 GMT+02:00  <wd_hw...@wistron.com>:
> > Hello:
> >
> >   Are there any documents or examples to explain the configuration of
> > NFS to access RADOSGW buckets on Jewel?
> >
> > Thanks a lot.
> >
> >
> >
> > Best Regards,
> >
> > WD
> >
> >
> >
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 

-- 
Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-707-0660
fax.  734-769-8938
cel.  734-216-5309
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Help on RGW NFS function

2016-09-21 Thread Matt Benjamin
Hi,

- Original Message -
> From: "yiming xie" <plato...@gmail.com>
> To: ceph-users@lists.ceph.com
> Sent: Wednesday, September 21, 2016 3:53:35 AM
> Subject: [ceph-users] Help on RGW NFS function
> 
> Hi,
> I have some question about rgw nfs.
> 
> ceph release notes: You can now access radosgw buckets via NFS
> (experimental).
> Beyond that sentence, the Ceph documentation gives no explanation, and
> I don't understand the implications of "experimental".
> 
> 1. Is the RGW NFS functionality complete? If it is not complete, which
> features are missing?

NFSv4 only initially (NFS3 support was just added on master).  The I/O model is 
simplified.  Objects in RGW cannot be mutated in place, and the NFS client 
always overwrites.  Clients are currently expected to write sequentially from 
offset 0--on Linux, you should mount with -o sync.
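
As an illustration (assuming the export is mounted at /mnt/rgw with -o sync as 
described above), whole-object sequential writes map cleanly onto RGW, while 
in-place updates do not:

  # OK: written sequentially from offset 0, becomes a single S3 object
  cp ./data.tar /mnt/rgw/mybucket/data.tar

  # Not supported by the current I/O model: rewriting a byte range inside an
  # existing object
  dd if=./patch.bin of=/mnt/rgw/mybucket/data.tar bs=1M seek=10 conv=notrunc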

The RGW/S3 namespace is an emulation of a POSIX one using substring search, so 
we impose some limitations.  You cannot move directories, for one.  There are 
likely to be published limits on bucket/object listing.

Some bugfixes are still in backport to Jewel.  That release supports NFSv4 and 
not NFS3.

> 2. How stable is the RGW nfs?

Some features are still being backported to Jewel.  I've submitted one 
important bugfix on master this week.  We are aiming for "general usability" 
over the next 1-2 months (NFSv4).

> 3. RGW nfs latest version can be used in a production environment yet?

If you're conservative, it's probably not "ready."  Now would be a good time to 
experiment with the feature and see whether it is potentially useful to you.

Matt

> 
> Please reply to my question as soon as possible. Very grateful, thank you!
> 
> plato.xie
> 
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 

-- 
Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-707-0660
fax.  734-769-8938
cel.  734-216-5309
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RADOSGW and LDAP

2016-09-16 Thread Matt Benjamin
Hi Brian,

This issue is fixed upstream in commit 08d54291435e.  It looks like this did 
not make it to Jewel; we're prioritizing it and will follow up when this and 
any related LDAP and NFS commits make it there.

Thanks for bringing this to our attention!

Matt

- Original Message -
> From: "Brian Contractor Andrus" <bdand...@nps.edu>
> To: ceph-users@lists.ceph.com
> Sent: Thursday, September 15, 2016 12:56:29 PM
> Subject: [ceph-users] RADOSGW and LDAP
> 
> 
> 
> All,
> 
> I have been making some progress on troubleshooting this.
> 
> I am seeing that when rgw is configured for LDAP, I am getting an error in my
> slapd log:
> 
> 
> 
> Sep 14 06:56:21 mgmt1 slapd[23696]: conn=1762 op=0 RESULT tag=97 err=2
> text=historical protocol version requested, use LDAPv3 instead
> 
> 
> 
> Am I correct with an interpretation that rgw does not do LDAPv3?
> Is there a way to enable this, or must I allow older versions in my OpenLDAP
> configuration?
> 
> 
> 
> Brian Andrus
> 
> ITACS/Research Computing
> 
> Naval Postgraduate School
> 
> Monterey, California
> 
> voice: 831-656-6238
> 
> 
> 
> _______
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 

-- 
Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-707-0660
fax.  734-769-8938
cel.  734-216-5309
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Any librados C API users out there?

2017-01-12 Thread Matt Benjamin
Hi,

- Original Message -
> From: "Yehuda Sadeh-Weinraub" <ysade...@redhat.com>
> To: "Sage Weil" <sw...@redhat.com>
> Cc: "Gregory Farnum" <gfar...@redhat.com>, "Jason Dillaman" 
> <dilla...@redhat.com>, "Piotr Dałek"
> <piotr.da...@corp.ovh.com>, "ceph-devel" <ceph-de...@vger.kernel.org>, 
> "ceph-users" <ceph-users@lists.ceph.com>
> Sent: Thursday, January 12, 2017 3:22:06 PM
> Subject: Re: [ceph-users] Any librados C API users out there?
> 
> On Thu, Jan 12, 2017 at 12:08 PM, Sage Weil <sw...@redhat.com> wrote:
> > On Thu, 12 Jan 2017, Gregory Farnum wrote:
> >> On Thu, Jan 12, 2017 at 5:54 AM, Jason Dillaman <jdill...@redhat.com>
> >> wrote:
> >> > There is option (3) which is to have a new (or modified)
> >> > "buffer::create_static" take an optional callback to invoke when the
> >> > buffer::raw object is destructed. The raw pointer would be destructed
> >> > when the last buffer::ptr / buffer::list containing it is destructed,
> >> > so you know it's no longer being referenced.
> >> >
> >> > You could then have the new C API methods that wrap the C buffer in a
> >> > bufferlist and set a new flag in the librados::AioCompletion to delay
> >> > its completion until after it's both completed and the memory is
> >> > released. When the buffer is freed, the callback would unblock the
> >> > librados::AioCompltion completion callback.
> >>
> >> I much prefer an approach like this: it's zero-copy; it's not a lot of
> >> user overhead; but it requires them to explicitly pass memory off to
> >> Ceph and keep it immutable until Ceph is done (at which point they are
> >> told so explicitly).
> >
> > Yeah, this is simpler.  I still feel like we should provide a way to
> > revoke buffers, though, because otherwise it's possible for calls to block
> > semi-indefinitely if, say, an old MOSDOp is queued for another OSD and that
> > OSD is not reading data off the socket but has not failed (e.g., due to
> > its rx throttling).
> >
> 
> We need to provide some way to cancel requests (at least from the
> client's aspect), that would guarantee that buffers are not going to
> be used (and no completion callback is going to be called).

Is the client/consumer cancellation async with respect to completion?  A cancellation in 
that case could ensure that, if it succeeds, those guarantees are met, or else 
fail (because the callback and completion have raced the cancellation).

Matt

> 
> Yehuda
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majord...@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 

-- 
Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Mapping data and metadata between rados and cephfs

2017-06-28 Thread Matt Benjamin
Hi,

That's true, sure.  We hope to support async mounts and more normal workflows 
in the future, but those are important caveats.  Editing objects in place doesn't 
work with RGW NFS.

Matt

- Original Message -
> From: "Gregory Farnum" <gfar...@redhat.com>
> To: "Matt Benjamin" <mbenja...@redhat.com>, "David Turner" 
> <drakonst...@gmail.com>
> Cc: ceph-users@lists.ceph.com
> Sent: Wednesday, June 28, 2017 4:14:39 PM
> Subject: Re: [ceph-users] Mapping data and metadata between rados and cephfs
> 
> On Wed, Jun 28, 2017 at 2:10 PM Matt Benjamin <mbenja...@redhat.com> wrote:
> 
> > Hi,
> >
> > A supported way to access S3 objects from a filesystem mount is with RGW
> > NFS.  That is, RGW now exports the S3 namespace directly as files and
> > directories, one consumer is an nfs-ganesha NFS driver.
> >
> 
> This supports a very specific subset of use cases/fs operations though,
> right? You can use it if you're just doing bulk file shuffling but it's not
> a way to upload via S3 and then perform filesystem update-in-place
> operations in any reasonable fashion (which is what I think was described
> in the original query).
> -Greg
> 
> 
> >
> > Regards,
> >
> > Matt
> >
> > - Original Message -
> > > From: "David Turner" <drakonst...@gmail.com>
> > > To: "Jonathan Lefman" <jonathan.lef...@intel.com>,
> > ceph-users@lists.ceph.com
> > > Sent: Wednesday, June 28, 2017 2:59:12 PM
> > > Subject: Re: [ceph-users] Mapping data and metadata between rados and
> > cephfs
> > >
> > > CephFS is very different from RGW. You may be able to utilize s3fs-fuse
> > to
> > > interface with RGW, but I haven't heard of anyone using that on the ML
> > > before.
> > >
> > > On Wed, Jun 28, 2017 at 2:57 PM Lefman, Jonathan <
> > jonathan.lef...@intel.com
> > > > wrote:
> > >
> > >
> > >
> > >
> > >
> > > Thanks for the prompt reply. I was hoping that there would be an s3fs (
> > > https://github.com/s3fs-fuse/s3fs-fuse ) equivalent for Ceph since
> > there are
> > > numerous functional similarities. Ideally one would be able to upload
> > data
> > > to a bucket and have the file synced to the local filesystem mount of
> > that
> > > bucket. This is similar to the idea of uploading data through RadosGW and
> > > have the data be available in CephFS.
> > >
> > >
> > >
> > > -Jon
> > >
> > >
> > >
> > > From: David Turner [mailto: drakonst...@gmail.com ]
> > > Sent: Wednesday, June 28, 2017 2:51 PM
> > >
> > >
> > >
> > > To: Lefman, Jonathan < jonathan.lef...@intel.com >;
> > ceph-users@lists.ceph.com
> > > Subject: Re: [ceph-users] Mapping data and metadata between rados and
> > cephfs
> > >
> > >
> > >
> > >
> > >
> > > CephFS and RGW store data differently. I have never heard of, nor do I
> > > believe that it's possible, to have CephFS and RGW sharing the same data
> > > pool.
> > >
> > >
> > >
> > >
> > >
> > > On Wed, Jun 28, 2017 at 2:48 PM Lefman, Jonathan <
> > jonathan.lef...@intel.com
> > > > wrote:
> > >
> > >
> > >
> > >
> > >
> > > Yes, sorry. I meant the RadosGW. I still do not know what the mechanism
> > is to
> > > enable the mapping between data inserted by the rados component and the
> > > cephfs component. I hope that makes sense.
> > >
> > >
> > >
> > > -Jon
> > >
> > >
> > >
> > > From: David Turner [mailto: drakonst...@gmail.com ]
> > > Sent: Wednesday, June 28, 2017 2:46 PM
> > > To: Lefman, Jonathan < jonathan.lef...@intel.com >;
> > ceph-users@lists.ceph.com
> > > Subject: Re: [ceph-users] Mapping data and metadata between rados and
> > cephfs
> > >
> > >
> > >
> > >
> > >
> > > You want to access the same data via a rados API and via cephfs? Are you
> > > thinking RadosGW?
> > >
> > >
> > >
> > >
> > >
> > > On Wed, Jun 28, 2017 at 1:54 PM Lefman, Jonathan <
> > jonathan.lef...@intel.com
> > > > wrote:
> > >
> > >
> > >
> > >
> > >
> &

Re: [ceph-users] Mapping data and metadata between rados and cephfs

2017-06-28 Thread Matt Benjamin
Hi,

A supported way to access S3 objects from a filesystem mount is with RGW NFS.  
That is, RGW now exports the S3 namespace directly as files and directories, 
one consumer is an nfs-ganesha NFS driver.

Regards,

Matt

- Original Message -
> From: "David Turner" <drakonst...@gmail.com>
> To: "Jonathan Lefman" <jonathan.lef...@intel.com>, ceph-users@lists.ceph.com
> Sent: Wednesday, June 28, 2017 2:59:12 PM
> Subject: Re: [ceph-users] Mapping data and metadata between rados and cephfs
> 
> CephFS is very different from RGW. You may be able to utilize s3fs-fuse to
> interface with RGW, but I haven't heard of anyone using that on the ML
> before.
> 
> On Wed, Jun 28, 2017 at 2:57 PM Lefman, Jonathan < jonathan.lef...@intel.com
> > wrote:
> 
> 
> 
> 
> 
> Thanks for the prompt reply. I was hoping that there would be an s3fs (
> https://github.com/s3fs-fuse/s3fs-fuse ) equivalent for Ceph since there are
> numerous functional similarities. Ideally one would be able to upload data
> to a bucket and have the file synced to the local filesystem mount of that
> bucket. This is similar to the idea of uploading data through RadosGW and
> have the data be available in CephFS.
> 
> 
> 
> -Jon
> 
> 
> 
> From: David Turner [mailto: drakonst...@gmail.com ]
> Sent: Wednesday, June 28, 2017 2:51 PM
> 
> 
> 
> To: Lefman, Jonathan < jonathan.lef...@intel.com >; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Mapping data and metadata between rados and cephfs
> 
> 
> 
> 
> 
> CephFS and RGW store data differently. I have never heard of, nor do I
> believe that it's possible, to have CephFS and RGW sharing the same data
> pool.
> 
> 
> 
> 
> 
> On Wed, Jun 28, 2017 at 2:48 PM Lefman, Jonathan < jonathan.lef...@intel.com
> > wrote:
> 
> 
> 
> 
> 
> Yes, sorry. I meant the RadosGW. I still do not know what the mechanism is to
> enable the mapping between data inserted by the rados component and the
> cephfs component. I hope that makes sense.
> 
> 
> 
> -Jon
> 
> 
> 
> From: David Turner [mailto: drakonst...@gmail.com ]
> Sent: Wednesday, June 28, 2017 2:46 PM
> To: Lefman, Jonathan < jonathan.lef...@intel.com >; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Mapping data and metadata between rados and cephfs
> 
> 
> 
> 
> 
> You want to access the same data via a rados API and via cephfs? Are you
> thinking RadosGW?
> 
> 
> 
> 
> 
> On Wed, Jun 28, 2017 at 1:54 PM Lefman, Jonathan < jonathan.lef...@intel.com
> > wrote:
> 
> 
> 
> 
> 
> Hi all,
> 
> 
> 
> I would like to create a 1-to-1 mapping between rados and cephfs. Here's the
> usage scenario:
> 
> 
> 
> 1. Upload file via rest api through rados compatible APIs
> 
> 2. Run "local" operations on the file delivered via rados on the linked
> cephfs mount
> 
> 3. Retrieve/download file via rados API on newly created data available on
> the cephfs mount
> 
> 
> 
> I would like to know whether this is possible out-of-the-box; this will never
> work; or this may work with a bit of effort. If this is possible, can this
> be achieved in a scalable manner to accommodate multiple (10s to 100s) users
> on the same system?
> 
> 
> 
> I asked this question in #ceph and #ceph-devel. So far, there have not been
> replies with a way to accomplish this. Thank you.
> 
> 
> 
> -Jon
> 
> 
> 
> 
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 

-- 
Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Luminous radosgw S3/Keystone integration issues

2018-05-04 Thread Matt Benjamin
Hi Dan,

We agreed in upstream RGW to make this change.  Do you intend to
submit this as a PR?

regards

Matt

On Fri, May 4, 2018 at 10:57 AM, Dan van der Ster <d...@vanderster.com> wrote:
> Hi Valery,
>
> Did you eventually find a workaround for this? I *think* we'd also
> prefer rgw to fallback to external plugins, rather than checking them
> before local. But I never understood the reasoning behind the change
> from jewel to luminous.
>
> I saw that there is work towards a cache for ldap [1] and I assume a
> similar approach would be useful for keystone as well.
>
> In the meantime, would a patch like [2] work?
>
> Cheers, Dan
>
> [1] https://github.com/ceph/ceph/pull/20624
>
> [2] diff --git a/src/rgw/rgw_auth_s3.h b/src/rgw/rgw_auth_s3.h
> index 6bcdebaf1c..3c343adf66 100644
> --- a/src/rgw/rgw_auth_s3.h
> +++ b/src/rgw/rgw_auth_s3.h
> @@ -129,20 +129,17 @@ public:
>add_engine(Control::SUFFICIENT, anonymous_engine);
>  }
>
> +/* The local auth. */
> +if (cct->_conf->rgw_s3_auth_use_rados) {
> +  add_engine(Control::SUFFICIENT, local_engine);
> +}
> +
>  /* The external auth. */
>  Control local_engine_mode;
>  if (! external_engines.is_empty()) {
>add_engine(Control::SUFFICIENT, external_engines);
> -
> -  local_engine_mode = Control::FALLBACK;
> -} else {
> -  local_engine_mode = Control::SUFFICIENT;
>  }
>
> -/* The local auth. */
> -if (cct->_conf->rgw_s3_auth_use_rados) {
> -  add_engine(local_engine_mode, local_engine);
> -}
>}
>
>const char* get_name() const noexcept override {
>
>
> On Thu, Feb 1, 2018 at 4:44 PM, Valery Tschopp <valery.tsch...@switch.ch> 
> wrote:
>> Hi,
>>
>> We are operating a Luminous 12.2.2 radosgw, with the S3 Keystone
>> authentication enabled.
>>
>> Some customers are uploading millions of objects per bucket at once,
>> therefore the radosgw is doing millions of s3tokens POST requests to the
>> Keystone. All those s3tokens requests to Keystone are the same (same
>> customer, same EC2 credentials). But because there is no cache in radosgw
>> for the EC2 credentials, every incoming S3 operation generates a call to the
>> external auth Keystone. It can generate hundreds of s3tokens requests per
>> second to Keystone.
>>
>> We had already this problem with Jewel, but we implemented a workaround. The
>> EC2 credentials of the customer were added directly in the local auth engine
>> of radosgw. So for this particular heavy user, the radosgw local
>> authentication was checked first, and no external auth request to Keystone
>> was necessary.
>>
>> But the default behavior for the S3 authentication has changed in Luminous.
>>
>> In Luminous, if you enable the S3 Keystone authentication, every incoming S3
>> operation will first check for anonymous authentication, then external
>> authentication (Keystone and/or LDAP), and only then local authentication.
>> See https://github.com/ceph/ceph/blob/master/src/rgw/rgw_auth_s3.h#L113-L141
>>
>> Is there a way to get the old authentication behavior (anonymous -> local ->
>> external) to work again?
>>
>> Or is it possible to implement a caching mechanism (similar to the Token
>> cache) for the EC2 credentials?
>>
>> Cheers,
>> Valery
>>
>> --
>> SWITCH
>> Valéry Tschopp, Software Engineer
>> Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
>> email: valery.tsch...@switch.ch phone: +41 44 268 1544
>>
>> 30 years of pioneering the Swiss Internet.
>> Celebrate with us at https://swit.ch/30years
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RGW bucket sharding in Jewel

2018-06-19 Thread Matt Benjamin
The increased time to list sharded buckets is currently expected, yes.
In turn, other operations such as put and delete should be faster in
proportion to two factors: the number of shards on independent PGs
(serialization by PG), and the spread of shards onto independent OSD
devices (speedup from scaling onto more OSD devices, presuming
available iops on those devices).

New bucket index formats are coming in the future to help listing
workloads.  Also, as of recent master (and probably Jewel and Luminous
at this point, modulo some latency for the backports), we have added an
"allow-unordered" option to the S3 and Swift listing arguments that should
remove the penalty from sharding.  This causes results to be returned
in partial order, rather than the total order most applications
expect.

Matt

On Tue, Jun 19, 2018 at 9:34 AM, Matthew Vernon  wrote:
> Hi,
>
> Some of our users have Quite Large buckets (up to 20M objects in a
> bucket), and AIUI best practice would be to have sharded indexes for
> those buckets (of the order of 1 shard per 100k objects).
>
> On a trivial test case (make a 1M-object bucket, shard index to 10
> shards, s3cmd ls s3://bucket >/dev/null), sharding makes the bucket
> listing slower (not a lot, but a bit).
>
> Are there simple(ish) workflows we could use to demonstrate an
> improvement from index sharding?
>
> Thanks,
>
> Matthew
>
> [I understand that Luminous has dynamic resharding, but it seems a bit
> unstable for production use; is that still the case?]
>
>
> --
>  The Wellcome Sanger Institute is operated by Genome Research
>  Limited, a charity registered in England with number 1021457 and a
>  company registered in England with number 2742969, whose registered
>  office is 215 Euston Road, London, NW1 2BE.
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>



-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] NFS-ganesha with RGW

2018-05-30 Thread Matt Benjamin
Hi Josef,

1. You do need the Ganesha fsal driver to be present;  I don't know
your platform and os version, so I couldn't look up what packages you
might need to install (or if the platform package does not build the
RGW fsal)
2. The most common reason for ganesha.nfsd to fail to bind to a port
is that a Linux kernel nfsd is already running--can you make sure
that's not the case (quick checks below);  meanwhile you -do- need rpcbind to be running
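
Quick checks, assuming a systemd-based distro (service names vary; these match
Ubuntu 16.04 as in your mail):

  systemctl status nfs-kernel-server   # kernel NFS server; stop/disable it if active
  systemctl status rpcbind             # must be running before starting ganesha.nfsd
  ss -tlnp | grep 2049                 # confirm nothing else already owns the NFS port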

Matt

On Wed, May 30, 2018 at 6:03 AM, Josef Zelenka
 wrote:
> Hi everyone, i'm currently trying to set up a NFS-ganesha instance that
> mounts a RGW storage, however i'm not succesful in this. I'm running Ceph
> Luminous 12.2.4 and ubuntu 16.04. I tried compiling ganesha from
> source(latest version), however i didn't manage to get the mount running
> with that, as ganesha refused to bind to the ipv6 interface - i assume this
> is a ganesha issue, but i didn't find any relevant info on what might cause
> this - my network setup should allow for that. Then i installed ganesha-2.6
> from the official repos, set up the config for RGW as per the official howto
> http://docs.ceph.com/docs/master/radosgw/nfs/, but i'm getting:
> Could not dlopen module:/usr/lib/x86_64-linux-gnu/ganesha/libfsalrgw.so
> Error:/usr/lib/x86_64-linux-gnu/ganesha/libfsalrgw.so: cannot open shared
> object file: No such file or directory
> and lo and behold, the libfsalrgw.so isn't present in the folder. I
> installed the nfs-ganesha and nfs-ganesha-fsal packages. I tried googling
> around, but i didn't find any relevant info or walkthroughs for this setup,
> so i'm asking - was anyone succesful in setting this up? I can see that even
> the redhat solution is still in progress, so i'm not sure if this even
> works. Thanks for any help,
>
> Josef
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] NFS-ganesha with RGW

2018-05-30 Thread Matt Benjamin
Hi Josef,

The main thing to make sure is that you have set up the host/vm
running nfs-ganesha exactly as if it were going to run radosgw.  For
example, you need an appropriate keyring and ceph config.  If radosgw
starts and services requests, nfs-ganesha should too.

With the debug settings you've described, you should be able to see a
bunch of output when you run ganesha.nfsd with -F.  You should see the
FSAL starting up with lots of debug output.
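
For example, something along these lines (default config path assumed; adjust
the path and log level to taste):

  # run in the foreground, log to stdout, with verbose logging
  ganesha.nfsd -F -f /etc/ganesha/ganesha.conf -L /dev/stdout -N NIV_DEBUG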

Matt

On Wed, May 30, 2018 at 8:19 AM, Josef Zelenka
 wrote:
> Hi, thanks for the quick reply. As for 1. I mentioned that i'm running
> ubuntu 16.04, kernel 4.4.0-121 - as it seems the platform
> package(nfs-ganesha-ceph) does not include the rgw fsal.
>
> 2. Nfsd was running - after rebooting i managed to get ganesha to bind,
> rpcbind is running, though i still can't mount the rgw due to timeouts. I
> suspect my conf might be wrong, but i'm not sure how to make sure it is.
> I've set up my ganesha.conf with the FSAL and RGW block - do i need anything
> else?
>
> EXPORT
> {
>  Export_ID=1;
>  Path = "/";
>  Pseudo = "/";
>  Access_Type = RW;
>  SecType = "sys";
>  NFS_Protocols = 4;
>  Transport_Protocols = TCP;
>
>  # optional, permit unsquashed access by client "root" user
>  #Squash = No_Root_Squash;
>
> FSAL {
>  Name = RGW;
>  User_Id = "<rgw user id>";
>  Access_Key_Id = "";
>  Secret_Access_Key = "";
>  }
>
> RGW {
> cluster = "ceph";
> name = "client.radosgw.radosgw-s2";
> ceph_conf = "/etc/ceph/ceph.conf";
> init_args = "-d --debug-rgw=16";
> }
> }
> Josef
>
>
>
>
>
> On 30/05/18 13:18, Matt Benjamin wrote:
>>
>> Hi Josef,
>>
>> 1. You do need the Ganesha fsal driver to be present;  I don't know
>> your platform and os version, so I couldn't look up what packages you
>> might need to install (or if the platform package does not build the
>> RGW fsal)
>> 2. The most common reason for ganesha.nfsd to fail to bind to a port
>> is that a Linux kernel nfsd is already running--can you make sure
>> that's not the case;  meanwhile you -do- need rpcbind to be running
>>
>> Matt
>>
>> On Wed, May 30, 2018 at 6:03 AM, Josef Zelenka
>>  wrote:
>>>
>>> Hi everyone, i'm currently trying to set up a NFS-ganesha instance that
>>> mounts a RGW storage, however i'm not succesful in this. I'm running Ceph
>>> Luminous 12.2.4 and ubuntu 16.04. I tried compiling ganesha from
>>> source(latest version), however i didn't manage to get the mount running
>>> with that, as ganesha refused to bind to the ipv6 interface - i assume
>>> this
>>> is a ganesha issue, but i didn't find any relevant info on what might
>>> cause
>>> this - my network setup should allow for that. Then i installed
>>> ganesha-2.6
>>> from the official repos, set up the config for RGW as per the official
>>> howto
>>> http://docs.ceph.com/docs/master/radosgw/nfs/, but i'm getting:
>>> Could not dlopen module:/usr/lib/x86_64-linux-gnu/ganesha/libfsalrgw.so
>>> Error:/usr/lib/x86_64-linux-gnu/ganesha/libfsalrgw.so: cannot open shared
>>> object file: No such file or directory
>>> and lo and behold, the libfsalrgw.so isn't present in the folder. I
>>> installed the nfs-ganesha and nfs-ganesha-fsal packages. I tried googling
>>> around, but i didn't find any relevant info or walkthroughs for this
>>> setup,
>>> so i'm asking - was anyone succesful in setting this up? I can see that
>>> even
>>> the redhat solution is still in progress, so i'm not sure if this even
>>> works. Thanks for any help,
>>>
>>> Josef
>>>
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
>>
>



-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] swift capabilities support in radosgw

2018-01-26 Thread Matt Benjamin
Hi Syed,

RGW supports Swift /info in Luminous.

By default, IIRC, it isn't served at the root of the URL hierarchy, but
there has been an option to change that since last year; see
https://github.com/ceph/ceph/pull/10280.
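
A quick way to check is to hit the endpoint directly; a rough sketch (host,
port and the default "swift" prefix are assumptions--adjust to your setup):

  curl http://rgw.example.com:8080/swift/info   # where RGW serves it by default, iirc
  curl http://rgw.example.com:8080/info         # where a stock "swift capabilities" client looks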

Matt

On Fri, Jan 26, 2018 at 5:10 AM, Syed Armani <syed.arm...@hastexo.com> wrote:
> Hello folks,
>
>
> I am getting this error "Capabilities GET failed: https://SWIFT:8080/info 404 
> Not Found",
> when executing a "$ swift capabilities" command against a radosgw cluster.
>
>
> I was wondering whether radosgw supports the listing of activated 
> capabilities[0] via Swift API?
> Something a user can see with "$ swift capabilities" in a native swift 
> cluster.
>
>
> [0] 
> https://developer.openstack.org/api-ref/object-store/index.html#list-activated-capabilities
>
> Thanks!
>
> Cheers,
> Syed
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Can't make LDAP work

2018-01-26 Thread Matt Benjamin
Hi Theofilos,

I'm not sure what's going wrong offhand, I see all the pieces in your writeup.

The first thing I would verify is that "CN=cephs3,OU=Users,OU=Organic
Units,DC=example,DC=com" can see the users in
ldaps://ldap.example.com:636, and that "cn=myuser..." can itself
do a simple bind using standard tools (quick sketch below).
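
Something like the following, reusing the DNs from your config (placeholders;
adjust to your directory):

  # 1) the service binddn can find the user entry
  ldapsearch -x -H ldaps://ldap.example.com:636 \
    -D "CN=cephs3,OU=Users,OU=Organic Units,DC=example,DC=com" -W \
    -b "OU=Users,OU=Organic Units,DC=example,DC=com" "cn=myuser" dn

  # 2) the user itself can simple-bind with the password you put in the token
  ldapsearch -x -H ldaps://ldap.example.com:636 \
    -D "CN=myuser,OU=Users,OU=Organic Units,DC=example,DC=com" -W \
    -s base -b "CN=myuser,OU=Users,OU=Organic Units,DC=example,DC=com" dn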

What Ceph version are you running?

Matt

On Fri, Jan 26, 2018 at 5:27 AM, Theofilos Mouratidis
<mtheofi...@gmail.com> wrote:
> They gave me a ldap server working with users inside, and I want to create
> tokens for these users
>  to use s3 from their ldap credentials.
> I tried using the sanity check and I got this one working:
>
> ldapsearch -x -D "CN=cephs3,OU=Users,OU=Organic Units,DC=example,DC=com" -W
> -H ldaps://ldap.example.com:636 -b 'OU=Users,OU=Organic
> Units,DC=example,DC=com' 'cn=*' dn
>
> My config is like this:
> [global]
> rgw_ldap_binddn = "CN=cephs3,OU=Users,OU=Organic Units,DC=example,DC=com"
> rgw_ldap_dnattr = "cn"
> rgw_ldap_searchdn = "OU=Users,OU=Organic Units,DC=example,DC=com"
> rgw_ldap_secret = "plaintext_pass"
> rgw_ldap_uri = ldaps://ldap.example.com:636
> rgw_s3_auth_use_ldap = true
>
> I create my token to test the ldap feature:
>
> export RGW_ACCESS_KEY_ID="myuser" #where "dn: cn=myuser..." is in
> ldap.example.com
> export RGW_SECRET_ACCESS_KEY="mypass"
> radosgw-token --encode --ttype=ad
> abcad=
> radosgw-token --encode --ttype=ldap
> abcldap=
>
> Now I go to s3cmd and in config I have something like this:
> acess_key = abcad=
> secret_key =
> use_https = false
> host_base = ceph_rgw.example.com:8080
> host_bucket = ceph_rgw.example.com:8080
>
>
> I get access denied,
> then I try with the ldap key and I get the same problem.
> I created a local user out of curiosity and I put in s3cmd acess and secret
> and I could create a bucket. What am I doing wrong?
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>



-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] OMAP warning ( again )

2018-09-01 Thread Matt Benjamin
",
> "bucket_id": "default.7320.3",
> "tenant": "",
> "explicit_placement": {
> "data_pool": ".rgw.buckets",
> "data_extra_pool": ".rgw.buckets.extra",
> "index_pool": ".rgw.buckets.index"
> }
> },
> "creation_time": "2016-03-09 17:23:50.00Z",
> "owner": "zz",
> "flags": 0,
> "zonegroup": "default",
> "placement_rule": "default-placement",
> "has_instance_obj": "true",
> "quota": {
> "enabled": false,
> "check_on_raw": false,
> "max_size": -1024,
> "max_size_kb": 0,
> "max_objects": -1
> },
> "num_shards": 0,
> "bi_shard_hash_type": 0,
> "requester_pays": "false",
> "has_website": "false",
> "swift_versioning": "false",
> "swift_ver_location": "",
> "index_type": 0,
> "mdsearch_config": [],
> "reshard_status": 0,
> "new_bucket_instance_id": ""
>
> When I run that shard setting to change the number of shards:
> "radosgw-admin reshard add --bucket=BKTEST --num-shards=2"
>
> Then run to get the status:
> "radosgw-admin reshard list"
>
> [
> {
> "time": "2018-08-01 21:58:13.306381Z",
> "tenant": "",
> "bucket_name": "BKTEST",
> "bucket_id": "default.7320.3",
> "new_instance_id": "",
> "old_num_shards": 1,
> "new_num_shards": 2
> }
> ]
>
> If it was 0, why does it say old_num_shards was 1?
>
> -Brent
>
> -Original Message-
> From: Brad Hubbard [mailto:bhubb...@redhat.com]
> Sent: Tuesday, July 31, 2018 9:07 PM
> To: Brent Kennedy 
> Cc: ceph-users 
> Subject: Re: [ceph-users] OMAP warning ( again )
>
> Search the cluster log for 'Large omap object found' for more details.
>
> On Wed, Aug 1, 2018 at 3:50 AM, Brent Kennedy  wrote:
>> Upgraded from 12.2.5 to 12.2.6, got a “1 large omap objects” warning
>> message, then upgraded to 12.2.7 and the message went away.  I just
>> added four OSDs to balance out the cluster ( we had some servers with
>> fewer drives in them; jbod config ) and now the “1 large omap objects”
>> warning message is back.  I did some googlefoo to try to figure out
>> what it means and then how to correct it, but the how to correct it is a bit 
>> vague.
>>
>>
>>
>> We use rados gateways for all storage, so everything is in the
>> .rgw.buckets pool, which I gather from research is why we are getting
>> the warning message ( there are millions of objects in there ).
>>
>>
>>
>> Is there an if/then process to clearing this error message?
>>
>>
>>
>> Regards,
>>
>> -Brent
>>
>>
>>
>> Existing Clusters:
>>
>> Test: Luminous 12.2.7 with 3 osd servers, 1 mon/man, 1 gateway ( all
>> virtual
>> )
>>
>> US Production: Firefly with 4 osd servers, 3 mons, 3 gateways behind
>> haproxy LB
>>
>> UK Production: Luminous 12.2.7 with 8 osd servers, 3 mons/man, 3
>> gateways behind haproxy LB
>>
>>
>>
>>
>>
>>
>>
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
>
>
> --
> Cheers,
> Brad
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
> 
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Civetweb log format

2018-03-08 Thread Matt Benjamin
Hi Yehuda,

I did add support for logging arbitrary headers, but not a
configurable log record format a la web servers.  To level set, David, are you
speaking about a file or pipe log sink on the RGW host?
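
For reference, the header piece is driven by config; a rough sketch of what I
mean (option names from memory--please double-check them against your
version's documentation; matched headers land in the RGW ops log, not the
civetweb access log):

  [client.rgw.gateway1]          # placeholder instance name
      rgw enable ops log = true
      rgw log http headers = "http_authorization, http_x_forwarded_for"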

Matt

On Thu, Mar 8, 2018 at 7:55 PM, Yehuda Sadeh-Weinraub <yeh...@redhat.com> wrote:
> On Thu, Mar 8, 2018 at 2:22 PM, David Turner <drakonst...@gmail.com> wrote:
>> I remember some time ago Yehuda had commented on a thread like this saying
>> that it would make sense to add a logging/auditing feature like this to RGW.
>> I haven't heard much about it since then, though.  Yehuda, do you remember
>> that and/or think that logging like this might become viable.
>
> I vaguely remember Matt was working on this. Matt?
>
> Yehuda
>
>>
>>
>> On Thu, Mar 8, 2018 at 4:17 PM Aaron Bassett <aaron.bass...@nantomics.com>
>> wrote:
>>>
>>> Yea thats what I was afraid of. I'm looking at possibly patching to add
>>> it, but i really dont want to support my own builds. I suppose other
>>> alternatives are to use proxies to log stuff, but that makes me sad.
>>>
>>> Aaron
>>>
>>>
>>> On Mar 8, 2018, at 12:36 PM, David Turner <drakonst...@gmail.com> wrote:
>>>
>>> Setting radosgw debug logging to 10/10 is the only way I've been able to
>>> get the access key in the logs for requests.  It's very unfortunate as it
>>> DRASTICALLY increases the amount of log per request, but it's what we needed
>>> to do to be able to have the access key in the logs along with the request.
>>>
>>> On Tue, Mar 6, 2018 at 3:09 PM Aaron Bassett <aaron.bass...@nantomics.com>
>>> wrote:
>>>>
>>>> Hey all,
>>>> I'm trying to get something of an audit log out of radosgw. To that end I
>>>> was wondering if theres a mechanism to customize the log format of 
>>>> civetweb.
>>>> It's already writing IP, HTTP Verb, path, response and time, but I'm hoping
>>>> to get it to print the Authorization header of the request, which 
>>>> containers
>>>> the access key id which we can tie back into the systems we use to issue
>>>> credentials. Any thoughts?
>>>>
>>>> Thanks,
>>>> Aaron
>>>> CONFIDENTIALITY NOTICE
>>>> This e-mail message and any attachments are only for the use of the
>>>> intended recipient and may contain information that is privileged,
>>>> confidential or exempt from disclosure under applicable law. If you are not
>>>> the intended recipient, any disclosure, distribution or other use of this
>>>> e-mail message or attachments is prohibited. If you have received this
>>>> e-mail message in error, please delete and notify the sender immediately.
>>>> Thank you.
>>>>
>>>> ___
>>>> ceph-users mailing list
>>>> ceph-users@lists.ceph.com
>>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>>>
>>



-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ganesha-rgw export with LDAP auth

2018-03-09 Thread Matt Benjamin
Hi Benjeman,

It is -intended- to work, identically to the standalone radosgw
server.  I can try to verify whether there could be a bug affecting
this path.

Matt

On Fri, Mar 9, 2018 at 12:01 PM, Benjeman Meekhof <bmeek...@umich.edu> wrote:
> I'm having issues exporting a radosgw bucket if the configured user is
> authenticated using the rgw ldap connectors.  I've verified that this
> same ldap token works ok for other clients, and as I'll note below it
> seems like the rgw instance is contacting the LDAP server and
> successfully authenticating the user.  Details:
>
> Ganesha export:
>  FSAL {
> Name = RGW;
> User_Id = "";
>
> Access_Key_Id =
> "eyJSR1dfVE9LRU4iOnsidmVyc2lvbiI6MSwidHlwZSI6ImxkYXAiLCJpZCI6ImJtZWVraG9mX29zaXJpc2FkbWluIiwia2V$
>
> # Secret_Access_Key =
> "eyJSR1dfVE9LRU4iOnsidmVyc2lvbiI6MSwidHlwZSI6ImxkYXAiLCJpZCI6ImJtZWVraG9mX29zaXJpc2FkbWluI$
> # Secret_Access_Key = "weW\/XGiHfcVhtH3chUTyoF+uz9Ldz3Hz";
>
> }
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>



-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Fixing bad radosgw index

2018-04-23 Thread Matt Benjamin
Mimic (and higher) contain a new async gc mechanism, which should
handle this workload internally.
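
If you want to see how far behind GC is on a pre-Mimic cluster, something like
the following gives a rough picture (the list can be very long):

  radosgw-admin gc list --include-all | head    # pending GC entries
  radosgw-admin gc process                      # process eligible entries now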

Matt

On Mon, Apr 23, 2018 at 2:55 PM, David Turner <drakonst...@gmail.com> wrote:
> When figuring out why space is not freeing up after deleting buckets and
> objects in RGW, look towards the RGW Garbage Collection.  This has come up
> on the ML several times in the past.  I am almost finished catching up on a
> GC of 200 Million objects that was taking up a substantial amount of space
> in my cluster.  I did this by running about 30 screens with the command
> `while true; do radosgw-admin gc process; sleep 10; done` in each of them.
> It appears that there are 32 available sockets for the gc to be processed
> and this helped us catch up on 200M objects in under 2 months.
>
> On Mon, Apr 16, 2018 at 12:01 PM Robert Stanford <rstanford8...@gmail.com>
> wrote:
>>
>>
>>  This doesn't work for me:
>>
>> for i in `radosgw-admin bucket list`; do radosgw-admin bucket unlink
>> --bucket=$i --uid=myuser; done   (tried with and without '=')
>>
>>  Errors for each bucket:
>>
>> failure: (2) No such file or directory2018-04-16 15:37:54.022423
>> 7f7c250fbc80  0 could not get bucket info for bucket="bucket5",
>>
>> On Mon, Apr 16, 2018 at 8:30 AM, Casey Bodley <cbod...@redhat.com> wrote:
>>>
>>>
>>>
>>> On 04/14/2018 12:54 PM, Robert Stanford wrote:
>>>
>>>
>>>  I deleted my default.rgw.buckets.data and default.rgw.buckets.index
>>> pools in an attempt to clean them out.  I brought this up on the list and
>>> received replies telling me essentially, "You shouldn't do that."  There was
>>> however no helpful advice on recovering.
>>>
>>>  When I run 'radosgw-admin bucket list' I get a list of all my old
>>> buckets (I thought they'd be cleaned out when I deleted and recreated
>>> default.rgw.buckets.index, but I was wrong.)  Deleting them with s3cmd and
>>> radosgw-admin does nothing; they still appear (though s3cmd will give a
>>> '404' error.)  Running radosgw-admin with 'bucket check' and '--fix' does
>>> nothing as well.  So, how do I get myself out of this mess.
>>>
>>>  On another, semi-related note, I've been deleting (existing) buckets and
>>> their contents with s3cmd (and --recursive); the space is never freed from
>>> ceph and the bucket still appears in s3cmd ls.  Looks like my radosgw has
>>> several issues, maybe all related to deleting and recreating the pools.
>>>
>>>  Thanks
>>>
>>>
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>>>
>>> The 'bucket list' command takes a user and prints the list of buckets
>>> they own - this list is read from the user object itself. You can remove
>>> these entries with the 'bucket unlink' command.
>>>
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>



-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RGW GC Processing Stuck

2018-04-24 Thread Matt Benjamin
process: removing
> .rgw.buckets:default.175209462.16__shadow_.06ry24pXQW8yH8EJpoqjEtZF6M6tiUv_12
>
>
>
> We seem completely unable to get this deleted, and nothing else of immediate
> concern is flagging up as a potential cause of all RGWs becoming unresponsive
> at the same time. On the bucket containing this object (the one we
> originally tried to purge), I have attempted a further purge passing the
> “—bypass-gc” parameter to it, but this also resulted in all rgws becoming
> unresponsive within 30 minutes and so I terminated the operation and
> restarted the rgws again.
>
>
>
> The bucket we attempted to remove has no shards and I have attached the
> details below. 90% of the contents of the bucket have already been
> successfully removed to our knowledge, and the bucket had no sharding (old
> bucket, sharding is now active for new buckets).
>
>
>
> root@ceph-rgw-1:~# radosgw-admin --id rgw.ceph-rgw-1 bucket stats
> --bucket=
>
> {
>
> "bucket": "",
>
> "pool": ".rgw.buckets",
>
> "index_pool": ".rgw.buckets.index",
>
> "id": "default.290071.4",
>
> "marker": "default.290071.4",
>
> "owner": "yy",
>
> "ver": "0#107938549",
>
> "master_ver": "0#0",
>
> "mtime": "2014-10-24 14:58:48.955805",
>
> "max_marker": "0#",
>
> "usage": {
>
> "rgw.none": {
>
> "size_kb": 0,
>
> "size_kb_actual": 0,
>
> "num_objects": 0
>
> },
>
> "rgw.main": {
>
> "size_kb": 186685939,
>
> "size_kb_actual": 189914068,
>
> "num_objects": 1419528
>
> },
>
> "rgw.multimeta": {
>
> "size_kb": 0,
>
> "size_kb_actual": 0,
>
> "num_objects": 24
>
> }
>
> },
>
> "bucket_quota": {
>
> "enabled": false,
>
> "max_size_kb": -1,
>
> "max_objects": -1
>
> }
>
> }
>
>
>
> If anyone has any thoughts, they’d be greatly appreciated!
>
>
>
> Kind Regards,
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>



-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Getting a public file from radosgw

2018-03-28 Thread Matt Benjamin
niqueid
>>> <https://radosgw.example.com/uniqueid>
>>>  ___
>>>  ceph-users mailing list
>>>  ceph-users@lists.ceph.com
>>>      http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>> <http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com>
>>>
>>>
>>>
>>>
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>



-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] [rgw] civetweb behind haproxy doesn't work with absolute URI

2018-03-31 Thread Matt Benjamin
I think if you haven't defined it in the Ceph config, it's disabled?
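
One way to check what the running gateway actually has (socket path and
instance name are placeholders):

  ceph daemon /var/run/ceph/ceph-client.rgw.gateway1.asok config get rgw_dns_name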

Matt

On Sat, Mar 31, 2018 at 4:59 PM, Rudenko Aleksandr <arude...@croc.ru> wrote:
> Hi, Sean.
>
> Thank you for the reply.
>
> What does it mean: “We had to disable "rgw dns name" in the end”?
>
> "rgw_dns_name": “”, has no effect for me.
>
>
>
> On 29 Mar 2018, at 11:23, Sean Purdy <s.pu...@cv-library.co.uk> wrote:
>
> We had something similar recently.  We had to disable "rgw dns name" in the
> end
>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>



-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Apply bucket policy to bucket for LDAP user: what is the correct identifier for principal

2018-10-11 Thread Matt Benjamin
Right, the user can be the DN component or something else projected
from the entry; details are in the docs.
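
In other words, if rgw_ldap_dnattr is "uid" and the entry's uid is "jdoe", the
policy principal would name that value (a sketch only--"jdoe" is a placeholder):

  "Principal": {"AWS": ["arn:aws:iam:::user/jdoe"]}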

Matt

On Thu, Oct 11, 2018 at 1:26 PM, Adam C. Emerson  wrote:
> Ha Son Hai  wrote:
>> Hello everyone,
>> I try to apply the bucket policy to my bucket for LDAP user but it doesn't 
>> work.
>> For user created by radosgw-admin, the policy works fine.
>>
>> {
>>
>>   "Version": "2012-10-17",
>>
>>   "Statement": [{
>>
>> "Effect": "Allow",
>>
>> "Principal": {"AWS": ["arn:aws:iam:::user/radosgw-user"]},
>>
>> "Action": "s3:*",
>>
>> "Resource": [
>>
>>   "arn:aws:s3:::shared-tenant-test",
>>
>>   "arn:aws:s3:::shared-tenant-test/*"
>>
>> ]
>>
>>   }]
>>
>> }
>
> LDAP users essentially are RGW users, so it should be this same
> format. As I understand RGW's LDAP interface (I have not worked with
> LDAP personally), every LDAP user gets a corresponding RGW user whose
> name is derived from rgw_ldap_dnattr, often 'uid' or 'cn', but this is
> dependent on site.
>
> If you, can check that part of configuration, and if that doesn't work
> if you'll send some logs I'll take a look. If something fishy is going
> on we can try opening a bug.
>
> Thank you.
>
> --
> Senior Software Engineer   Red Hat Storage, Ann Arbor, MI, US
> IRC: Aemerson@OFTC, Actinic@Freenode
> 0x80F7544B90EDBFB9 E707 86BA 0C1B 62CC 152C  7C12 80F7 544B 90ED BFB9
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Luminous 12.2.5 - crushable RGW

2018-10-24 Thread Matt Benjamin
 buffer
>>0/ 1 timer
>>0/ 1 filer
>>0/ 1 striper
>>0/ 1 objecter
>>0/ 5 rados
>>0/ 5 rbd
>>0/ 5 rbd_mirror
>>0/ 5 rbd_replay
>>0/ 5 journaler
>>0/ 5 objectcacher
>>0/ 5 client
>>1/ 5 osd
>>0/ 5 optracker
>>0/ 5 objclass
>>1/ 3 filestore
>>1/ 3 journal
>>0/ 5 ms
>>1/ 5 mon
>>0/10 monc
>>1/ 5 paxos
>>0/ 5 tp
>>1/ 5 auth
>>1/ 5 crypto
>>1/ 1 finisher
>>1/ 1 reserver
>>1/ 5 heartbeatmap
>>1/ 5 perfcounter
>>1/ 5 rgw
>>1/10 civetweb
>>1/ 5 javaclient
>>1/ 5 asok
>>1/ 1 throttle
>>0/ 0 refs
>>1/ 5 xio
>>1/ 5 compressor
>>1/ 5 bluestore
>>1/ 5 bluefs
>>1/ 3 bdev
>>1/ 5 kstore
>>4/ 5 rocksdb
>>4/ 5 leveldb
>>4/ 5 memdb
>>1/ 5 kinetic
>>1/ 5 fuse
>>1/ 5 mgr
>>1/ 5 mgrc
>>1/ 5 dpdk
>>1/ 5 eventtrace
>>   -2/-2 (syslog threshold)
>>   -1/-1 (stderr threshold)
>>   max_recent 1
>>   max_new 1000
>>   log_file /var/log/ceph/ceph-client.rgw.ceph-node-01.log
>> --- end dump of recent events ---
>> 2018-07-13 05:22:16.176189 7fcce6575700 -1 *** Caught signal (Aborted) **
>>  in thread 7fcce6575700 thread_name:civetweb-worker
>>
>>
>>
>> Any trick to enable rgw_dynamic_resharding only during off-peak hours? Is
>> 'rgw_dynamic_resharding' an injectable setting with no need to restart RGWs?
>> So far, I only managed to change it via the configuration file with an RGW service
>> restart.
>>
>> Regarding `ERROR: flush_read_list(): d->client_cb->handle_data() returned
>> -5`, I'll try to increaste timeouts on nginx side.
>>
>> Any help with this error `/build/ceph-12.2.5/src/common/buffer.cc: 1967:
>> FAILED assert(len+off <= bp.length())` which seems to directly impact RGW
>> service state? Could it be caused by a lot of GET requests for big files? It
>> happens only after the flood of `ERROR: flush_read_list()`.
>>
>> Thanks
>> Jakub
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>



-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] inexplicably slow bucket listing at top level

2018-11-05 Thread Matt Benjamin
Hi,

I just did some testing to confirm, and can report that "mc ls -r"
does appear to induce noticeable latency related to the Unix path
emulation.
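
As a rough way to see the effect yourself (endpoint and bucket below are
placeholders), compare a delimited listing -- which is what the path
emulation does per "directory" -- with a flat one:

# delimited listing, the prefix/delimiter emulation mc relies on
time aws s3api list-objects --endpoint-url http://rgw.example.com \
    --bucket friedlab --delimiter '/' > /dev/null

# flat listing of the same bucket, no delimiter
time aws s3api list-objects --endpoint-url http://rgw.example.com \
    --bucket friedlab > /dev/null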

Matt

On Mon, Nov 5, 2018 at 3:10 PM, J. Eric Ivancich  wrote:
> I did make an inquiry and someone here does have some experience w/ the
> mc command -- minio client. We're curious how "ls -r" is implemented
> under mc. Does it need to get a full listing and then do some path
> parsing to produce nice output? If so, it may be playing a role in the
> delay as well.
>
> Eric
>
> On 9/26/18 5:27 PM, Graham Allan wrote:
>> I have one user bucket, where inexplicably (to me), the bucket takes an
>> eternity to list, though only on the top level. There are two
>> subfolders, each of which lists individually at a completely normal
>> speed...
>>
>> eg (using minio client):
>>
>>> [~] % time ./mc ls fried/friedlab/
>>> [2018-09-26 16:15:48 CDT] 0B impute/
>>> [2018-09-26 16:15:48 CDT] 0B wgs/
>>>
>>> real1m59.390s
>>>
>>> [~] % time ./mc ls -r fried/friedlab/
>>> ...
>>> real 3m18.013s
>>>
>>> [~] % time ./mc ls -r fried/friedlab/impute
>>> ...
>>> real 0m13.512s
>>>
>>> [~] % time ./mc ls -r fried/friedlab/wgs
>>> ...
>>> real 0m6.437s
>>
>> The bucket has about 55k objects total, with 32 index shards on a
>> replicated ssd pool. It shouldn't be taking this long but I can't
>> imagine what could be causing this. I haven't found any others behaving
>> this way. I'd think it has to be some problem with the bucket index, but
>> what...?
>>
>> I did naively try some "radosgw-admin bucket check [--fix]" commands
>> with no change.
>>
>> Graham
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] OMAP size on disk

2018-10-09 Thread Matt Benjamin
Hi Luis,

There are currently open issues with space reclamation after dynamic
bucket index resharding, esp. http://tracker.ceph.com/issues/34307

Changes are being worked on to address this, and to permit
administratively reclaiming space.
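
In the meantime, a rough way to see where the index space sits (the pool
name below is the default and may differ on your cluster) is to check for
stale reshard instances on a release recent enough to have that
subcommand, and to compare omap key counts across the bucket index
objects:

radosgw-admin reshard stale-instances list

for obj in $(rados -p default.rgw.buckets.index ls); do
    echo "$obj $(rados -p default.rgw.buckets.index listomapkeys $obj | wc -l)"
done | sort -k2 -n | tail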

Matt

On Tue, Oct 9, 2018 at 5:50 AM, Luis Periquito  wrote:
> Hi all,
>
> I have several clusters, all running Luminous (12.2.7) proving S3
> interface. All of them have enabled dynamic resharding and is working.
>
> One of the newer clusters is starting to give warnings on the used
> space for the OMAP directory. The default.rgw.buckets.index pool is
> replicated with 3x copies of the data.
>
> I created a new crush ruleset to only use a few well known SSDs, and
> the OMAP directory size changed as expected: if I set the OSD as out
> and then tell it to compact, the size of the OMAP will shrink. If I set
> the OSD as in the OMAP will grow to its previous state. And while the
> backfill is going we get loads of key recoveries.
>
> Total physical space for OMAP in the OSDs that have them is ~1TB, so
> given a 3x replica ~330G before replication.
>
> The data size for the default.rgw.buckets.data is just under 300G.
> There is one bucket who has ~1.7M objects and 22 shards.
>
> After deleting that bucket the size of the database didn't change -
> even after running gc process and telling the OSD to compact its
> database.
>
> This is not happening in older clusters, i.e created with hammer.
> Could this be a bug?
>
> I looked at getting all the OMAP keys and sizes
> (https://ceph.com/geen-categorie/get-omap-keyvalue-size/) and they add
> up to close to the value I expected them to take, looking at the physical
> storage.
>
> Any ideas where to look next?
>
> thanks for all the help.
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>



-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Multi tenanted radosgw and existing Keystone users/tenants

2018-12-05 Thread Matt Benjamin
This capability is stable and should merge to master shortly.

Matt
On Wed, Dec 5, 2018 at 11:24 AM Florian Haas  wrote:
>
> Hi Mark,
>
> On 04/12/2018 04:41, Mark Kirkwood wrote:
> > Hi,
> >
> > I've set up a Luminous RGW with Keystone integration, and subsequently set
> >
> > rgw keystone implicit tenants = true
> >
> > So now all newly created users/tenants (or old ones that never accessed
> > RGW) get their own namespaces. However there are some pre-existing users
> > that have created buckets and objects - and these are in the global
> > namespace. Is there any way to move the existing buckets and objects to
> > private namespaces and change these users to use said private namespaces?
>
> It looks like you're running into the issue described in this PR:
> https://github.com/ceph/ceph/pull/23994
>
> Sooo... bit complicated, fix still pending.
>
> Cheers,
> Florian
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>


-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RGW Swift metadata dropped when S3 bucket versioning enabled

2018-12-05 Thread Matt Benjamin
Agree, please file a tracker issue with the info, we'll prioritize
reproducing it.
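
For anyone else wanting to reproduce, the sequence is roughly the following
(endpoint, container and object names are placeholders):

# object uploaded before versioning: Swift metadata behaves normally
swift upload mycontainer file1.txt
swift post -m 'Color:Blue' mycontainer file1.txt
swift stat mycontainer file1.txt

# enable S3 versioning on the same bucket
aws s3api put-bucket-versioning --endpoint-url http://rgw.example.com \
    --bucket mycontainer --versioning-configuration Status=Enabled

# object uploaded after versioning: metadata reportedly gets dropped
swift upload mycontainer file2.txt
swift post -m 'Color:Red' mycontainer file2.txt
swift stat mycontainer file2.txt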

Cheers,

Matt
On Wed, Dec 5, 2018 at 11:42 AM Florian Haas  wrote:
>
> On 05/12/2018 17:35, Maxime Guyot wrote:
> > Hi Florian,
> >
> > Thanks for the help. I did further testing and narrowed it down to
> > objects that have been uploaded when the bucket has versioning enabled.
> > Objects created before that are not affected: all metadata operations
> > are still possible.
> >
> > Here is a simple way to reproduce
> > this: http://paste.openstack.org/show/736713/
> > And here is the snippet to easily turn on/off S3 versioning on a given
> > bucket: https://gist.github.com/Miouge1/b8ae19b71411655154e74e609b61f24e
> >
> > Cheers,
> > Maxime
>
> All right, by my reckoning this would very much look like a bug then.
> You probably want to chuck an issue for this into
> https://tracker.ceph.com/projects/rgw.
>
> Out of curiosity, are you also seeing Swift metadata getting borked when
> you're enabling *Swift* versioning? (Wholly different animal, I know,
> but still worth taking a look I think.)
>
> Cheers
> Florian
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Radosgw s3 subuser permissions

2019-01-24 Thread Matt Benjamin
Hi Marc,

I'm not actually certain whether the traditional ACLs permit any
solution for that, but I believe with bucket policy, you can achieve
precise control within and across tenants, for any set of desired
resources (buckets).
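
As a sketch only (bucket, tenant and user names are placeholders, and I
haven't verified this exact policy), granting a second user -- not a
subuser -- read/write access limited to one prefix would look something
like:

cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": ["arn:aws:iam::Company:user/otheruser"]},
    "Action": ["s3:PutObject", "s3:GetObject"],
    "Resource": ["arn:aws:s3:::bucket/folder2/*"]
  }]
}
EOF
s3cmd setpolicy policy.json s3://bucket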

Matt

On Thu, Jan 24, 2019 at 3:18 PM Marc Roos  wrote:
>
>
> Is it correct that it is NOT possible for s3 subusers to have different
> permissions on folders created by the parent account?
> Thus the --access=[ read | write | readwrite | full ] is for everything
> the parent has created, and it is not possible to change that for
> specific folders/buckets?
>
> radosgw-admin subuser create --uid='Company$archive' --subuser=testuser
> --key-type=s3
>
> Thus if archive created this bucket/folder structure.
> └── bucket
>     ├── folder1
>     ├── folder2
>     └── folder3
>         └── folder4
>
> It is not possible to allow testuser to only write in folder2?
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RGW: Reshard index of non-master zones in multi-site

2019-04-07 Thread Matt Benjamin
gt; > .dir.0ef1a91a-4aee-427e-bdf8-30589abb2d3e.97248676.1.8
> > >
> > > I would assume then that unlike what documentation says, it's safe to
> > > run 'reshard stale-instances rm' on a multi-site setup.
> > >
> > > However it is quite telling if the author of this feature doesn't
> > > trust what they have written to work correctly.
> > >
> > > There are still thousands of stale index objects that 'stale-instances
> > > list' didn't pick up though.  But it appears that radosgw-admin only
> > > looks at 'metadata list bucket' data, and not what is physically
> > > inside the pool.
> > >
> > > --
> > > Iain Buclaw
> > >
> > > *(p < e ? p++ : p) = (c & 0x0f) + '0';
> > > ___
> > > ceph-users mailing list
> > > ceph-users@lists.ceph.com
> > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> > >
>
>
> --
> Christian BalzerNetwork/Systems Engineer
> ch...@gol.com   Rakuten Communications
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>


-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Need clarification about RGW S3 Bucket Tagging

2019-03-14 Thread Matt Benjamin
Hi Konstantin,

Luminous does not support bucket tagging--although I've done Luminous
backports for downstream use, and would be willing to help with
upstream backports if there is community support.

Matt

On Thu, Mar 14, 2019 at 9:53 AM Konstantin Shalygin  wrote:
>
> On 3/14/19 8:36 PM, Casey Bodley wrote:
> > The bucket policy documentation just lists which actions the policy
> > engine understands. Bucket tagging isn't supported, so those requests
> > were misinterpreted as normal PUT requests to create a bucket. I
> > opened https://github.com/ceph/ceph/pull/26952 to return 405 Method
> > Not Allowed there instead and update the doc to clarify that it's not
> > supported.
>
> Do I understand correctly that:
>
> - Luminous: supports object tagging.
>
> - Mimic+: supports object tagging and lifecycle policies on these tags [1].
>
> ?
>
>
> Thanks,
>
> k
>
> [1] https://tracker.ceph.com/issues/24011
>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>


-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Need clarification about RGW S3 Bucket Tagging

2019-03-14 Thread Matt Benjamin
Yes, sorry to misstate that.  I was conflating with lifecycle
configuration support.
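
For reference, a lifecycle rule keyed on an object tag (the Mimic+ piece I
was thinking of) looks roughly like this -- endpoint and bucket are
placeholders:

aws s3api put-bucket-lifecycle-configuration --endpoint-url http://rgw.example.com \
    --bucket my_bucket --lifecycle-configuration '{
  "Rules": [{
    "ID": "expire-bob-objects",
    "Status": "Enabled",
    "Filter": {"Tag": {"Key": "User", "Value": "Bob"}},
    "Expiration": {"Days": 30}
  }]
}'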

Matt

On Thu, Mar 14, 2019 at 10:06 AM Konstantin Shalygin  wrote:
>
> On 3/14/19 8:58 PM, Matt Benjamin wrote:
> > Sorry, object tagging.  There's a bucket tagging question in another thread 
> > :)
>
> Luminous works fine with object tagging, at least on 12.2.11:
> getObjectTagging and putObjectTagging.
>
>
> [k0ste@WorkStation]$ curl -s
> https://rwg_civetweb/my_bucket/empty-file.txt?tagging | xq '.Tagging[]'
> "http://s3.amazonaws.com/doc/2006-03-01/;
> {
>"Tag": {
>  "Key": "User",
>  "Value": "Bob"
>}
> }
>
>
>
> k
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Need clarification about RGW S3 Bucket Tagging

2019-03-14 Thread Matt Benjamin
Sorry, object tagging.  There's a bucket tagging question in another thread :)

Matt

On Thu, Mar 14, 2019 at 9:58 AM Matt Benjamin  wrote:
>
> Hi Konstantin,
>
> Luminous does not support bucket tagging--although I've done Luminous
> backports for downstream use, and would be willing to help with
> upstream backports if there is community support.
>
> Matt
>
> On Thu, Mar 14, 2019 at 9:53 AM Konstantin Shalygin  wrote:
> >
> > On 3/14/19 8:36 PM, Casey Bodley wrote:
> > > The bucket policy documentation just lists which actions the policy
> > > engine understands. Bucket tagging isn't supported, so those requests
> > > were misinterpreted as normal PUT requests to create a bucket. I
> > > opened https://github.com/ceph/ceph/pull/26952 to return 405 Method
> > > Not Allowed there instead and update the doc to clarify that it's not
> > > supported.
> >
> > As I understand correct, that:
> >
> > - Luminous: support object tagging.
> >
> > - Mimic+: support object tagging and lifecycle policing on this tags [1].
> >
> > ?
> >
> >
> > Thanks,
> >
> > k
> >
> > [1] https://tracker.ceph.com/issues/24011
> >
> >
> >
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> >
>
>
> --
>
> Matt Benjamin
> Red Hat, Inc.
> 315 West Huron Street, Suite 140A
> Ann Arbor, Michigan 48103
>
> http://www.redhat.com/en/technologies/storage
>
> tel.  734-821-5101
> fax.  734-769-8938
> cel.  734-216-5309



-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RadosGW ops log lag?

2019-04-12 Thread Matt Benjamin
Hi Aaron,

I don't think that exists currently.

Matt

On Fri, Apr 12, 2019 at 11:12 AM Aaron Bassett
 wrote:
>
> I have a radosgw log centralizer that we use for an audit trail for data 
> access in our ceph clusters. We've enabled the ops log socket and added 
> logging of the http_authorization header to it:
>
> rgw log http headers = "http_authorization"
> rgw ops log socket path = /var/run/ceph/rgw-ops.sock
> rgw enable ops log = true
>
> We have a daemon that listens on the ops socket, extracts/manipulates some 
> information from the ops log, and sends it off to our log aggregator.
>
> This setup works pretty well for the most part, except when the cluster comes 
> under heavy load, it can get _very_ laggy - sometimes up to several hours 
> behind. I'm having a hard time nailing down whats causing this lag. The 
> daemon is rather naive, basically just some nc with jq in between, but the 
> log aggregator has plenty of spare capacity, so I don't think its slowing 
> down how fast the daemon is consuming from the socket.
>
> I was revisiting the documentation about this ops log and noticed the 
> following which I hadn't seen previously:
>
> When specifying a UNIX domain socket, it is also possible to specify the 
> maximum amount of memory that will be used to keep the data backlog:
> rgw ops log data backlog = 
> Any backlogged data in excess to the specified size will be lost, so the 
> socket needs to be read constantly.
>
> I'm wondering if theres a way I can query radosgw for the current size of 
> that backlog to help me narrow down where the bottleneck may be occuring.
>
> Thanks,
> Aaron
>
>
>
> CONFIDENTIALITY NOTICE
> This e-mail message and any attachments are only for the use of the intended 
> recipient and may contain information that is privileged, confidential or 
> exempt from disclosure under applicable law. If you are not the intended 
> recipient, any disclosure, distribution or other use of this e-mail message 
> or attachments is prohibited. If you have received this e-mail message in 
> error, please delete and notify the sender immediately. Thank you.
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>


-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Try to log the IP in the header X-Forwarded-For with radosgw behind haproxy

2019-04-16 Thread Matt Benjamin
Hi Francois,

Why is using an explicit unix socket problematic for you?  For what it
does, that decision has always seemed sensible.

Matt

On Tue, Apr 16, 2019 at 7:04 PM Francois Lafont
 wrote:
>
> Hi @all,
>
> On 4/9/19 12:43 PM, Francois Lafont wrote:
>
> > I have tried this config:
> >
> > -
> > rgw enable ops log  = true
> > rgw ops log socket path = /tmp/opslog
> > rgw log http headers= http_x_forwarded_for
> > -
> >
> > and I have logs in the socket /tmp/opslog like this:
> >
> > -
> > {"bucket":"test1","time":"2019-04-09 
> > 09:41:18.188350Z","time_local":"2019-04-09 
> > 11:41:18.188350","remote_addr":"10.111.222.51","user":"flaf","operation":"GET","uri":"GET
> >  /?prefix=toto/=%2F 
> > HTTP/1.1","http_status":"200","error_code":"","bytes_sent":832,"bytes_received":0,"object_size":0,"total_time":39,"user_agent":"DragonDisk
> >  1.05 ( http://www.dragondisk.com 
> > )","referrer":"","http_x_headers":[{"HTTP_X_FORWARDED_FOR":"10.111.222.55"}]},
> > -
> >
> > I can see the IP address of the client in the value of 
> > HTTP_X_FORWARDED_FOR, that's cool.
> >
> > But I don't understand why there is a specific socket to log that? I'm 
> > using radosgw in a Docker container (installed via ceph-ansible) and I have 
> > logs of the "radosgw" daemon in the "/var/log/syslog" file of my host (I'm 
> > using the Docker "syslog" log-driver).
> >
> > 1. Why is there a _separate_ log source for that? Indeed, in 
> > "/var/log/syslog" I have already some logs of civetweb. For instance:
> >
> >  2019-04-09 12:33:45.926 7f02e021c700  1 civetweb: 0x55876dc9c000: 
> > 10.111.222.51 - - [09/Apr/2019:12:33:45 +0200] "GET 
> > /?prefix=toto/=%2F HTTP/1.1" 200 1014 - DragonDisk 1.05 ( 
> > http://www.dragondisk.com )
>
> The fact that radosgw uses a separate log source for "ops log" (i.e. a specific 
> Unix socket) is still a mystery to me.
>
>
> > 2. In my Docker container context, is it possible to put the logs above in 
> > the file "/var/log/syslog" of my host, in other words is it possible to 
> > make sure to log this in stdout of the daemon "radosgw"?
>
> It seems to me impossible to send the ops log to the stdout of the "radosgw" 
> process (or, if it's possible, I have not found how). So I have made a 
> workaround. I have set:
>
>  rgw_ops_log_socket_path = /var/run/ceph/rgw-opslog.asok
>
> in my ceph.conf and I have created a daemon (via un systemd unit file) which 
> runs this loop:
>
>  while true;
>  do
>  netcat -U "/var/run/ceph/rgw-opslog.asok" | logger -t "rgwops" -p "local5.notice"
>  done
>
> to retrieve logs in syslog. It's not very satisfying but it works.
>
> --
> François (flaf)
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RGW Bucket unable to list buckets 100TB bucket

2019-05-03 Thread Matt Benjamin
> 2019-05-03 10:42:36.439 7f65f33db700  2 req 115:0s:s3:GET 
> /[bucketname]/:list_bucket:normalizing buckets and tenants
>
> 2019-05-03 10:42:36.439 7f65f33db700  2 req 115:0s:s3:GET 
> /[bucketname]/:list_bucket:init permissions
>
> 2019-05-03 10:42:36.439 7f65f33db700  2 req 115:0s:s3:GET 
> /[bucketname]/:list_bucket:recalculating target
>
> 2019-05-03 10:42:36.439 7f65f33db700  2 req 115:0s:s3:GET 
> /[bucketname]/:list_bucket:reading permissions
>
> 2019-05-03 10:42:36.439 7f65f33db700  2 req 115:0s:s3:GET 
> /[bucketname]/:list_bucket:init op
>
> 2019-05-03 10:42:36.439 7f65f33db700  2 req 115:0s:s3:GET 
> /[bucketname]/:list_bucket:verifying op mask
>
> 2019-05-03 10:42:36.439 7f65f33db700  2 req 115:0s:s3:GET 
> /[bucketname]/:list_bucket:verifying op permissions
>
> 2019-05-03 10:42:36.439 7f65f33db700  2 req 115:0s:s3:GET 
> /[bucketname]/:list_bucket:verifying op params
>
> 2019-05-03 10:42:36.439 7f65f33db700  2 req 115:0s:s3:GET 
> /[bucketname]/:list_bucket:pre-executing
>
> 2019-05-03 10:42:36.439 7f65f33db700  2 req 115:0s:s3:GET 
> /[bucketname]/:list_bucket:executing
>
> 2019-05-03 10:42:53.026 7f660e411700  2 
> RGWDataChangesLog::ChangesRenewThread: start
>
> 2019-05-03 10:43:15.027 7f660e411700  2 
> RGWDataChangesLog::ChangesRenewThread: start
>
> 2019-05-03 10:43:37.028 7f660e411700  2 
> RGWDataChangesLog::ChangesRenewThread: start
>
> 2019-05-03 10:43:59.027 7f660e411700  2 
> RGWDataChangesLog::ChangesRenewThread: start
>
> 2019-05-03 10:44:21.028 7f660e411700  2 
> RGWDataChangesLog::ChangesRenewThread: start
>
> 2019-05-03 10:44:43.027 7f660e411700  2 
> RGWDataChangesLog::ChangesRenewThread: start
>
> 2019-05-03 10:45:05.027 7f660e411700  2 
> RGWDataChangesLog::ChangesRenewThread: start
>
> 2019-05-03 10:45:18.260 7f660cc0e700  2 object expiration: start
>
> 2019-05-03 10:45:18.779 7f660cc0e700  2 object expiration: stop
>
> 2019-05-03 10:45:27.027 7f660e411700  2 
> RGWDataChangesLog::ChangesRenewThread: start
>
> 2019-05-03 10:45:49.027 7f660e411700  2 
> RGWDataChangesLog::ChangesRenewThread: start
>
> 2019-05-03 10:46:11.027 7f660e411700  2 
> RGWDataChangesLog::ChangesRenewThread: start
>
> 2019-05-03 10:46:33.027 7f660e411700  2 
> RGWDataChangesLog::ChangesRenewThread: start
>
> 2019-05-03 10:46:55.028 7f660e411700  2 
> RGWDataChangesLog::ChangesRenewThread: start
>
> 2019-05-03 10:47:02.092 7f65f33db700  2 req 115:265.652s:s3:GET 
> /[bucketname]/:list_bucket:completing
>
> 2019-05-03 10:47:02.092 7f65f33db700  2 req 115:265.652s:s3:GET 
> /[bucketname]/:list_bucket:op status=0
>
> 2019-05-03 10:47:02.092 7f65f33db700  2 req 115:265.652s:s3:GET 
> /[bucketname]/:list_bucket:http status=200
>
> 2019-05-03 10:47:02.092 7f65f33db700  1 == req done req=0x55eba26e8970 op 
> status=0 http_status=200 ==
>
>
>
>
>
> radosgw-admin bucket limit check
>
>  }
>
> "bucket": "[BUCKETNAME]",
>
> "tenant": "",
>
> "num_objects": 7126133,
>
> "num_shards": 128,
>
> "objects_per_shard": 55672,
>
> "fill_status": "OK"
>
> },
>
>
>
>
>
> We really don't know how to solve that; it looks like a timeout or slow 
> performance for that bucket.
>
>
>
> Our RGW section in ceph.conf
>
>
>
> [client.rgw.ceph-rgw01]
>
> host = ceph-rgw01
>
> rgw enable usage log = true
>
> rgw dns name = XX
>
> rgw frontends = "beast port=7480"
>
> rgw resolve cname = false
>
> rgw thread pool size = 128
>
> rgw num rados handles = 1
>
> rgw op thread timeout = 120
>
>
>
>
>
> [client.rgw.ceph-rgw03]
>
> host = ceph-rgw03
>
> rgw enable usage log = true
>
> rgw dns name = 
>
> rgw frontends = "beast port=7480"
>
> rgw resolve cname = false
>
> rgw thread pool size = 640
>
> rgw num rados handles = 16
>
> rgw op thread timeout = 120
>
>
>
>
>
> Best Regards,
>
>
>
> Manuel
>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RGW Bucket unable to list buckets 100TB bucket

2019-05-03 Thread Matt Benjamin
-5516-4b83-864c-d2b05b9db5af/CBB_DFTDI/CBB_DiskImage/Disk_011c3bdb-ac85-41f4-b727-c263001ba42f/Volume_Unknown_fbf0ea7a-af96-4dd4-9ad5-dbf6efdeefdc%24/20190430074414/0.cbrevision:get_obj:op
>  status=-104
> 2019-05-03 15:37:28.959 7f4a68484700  2 req 23574:41.87s:s3:GET 
> /CBERRY/MBS-69e38e26-5516-4b83-864c-d2b05b9db5af/CBB_DFTDI/CBB_DiskImage/Disk_011c3bdb-ac85-41f4-b727-c263001ba42f/Volume_Unknown_fbf0ea7a-af96-4dd4-9ad5-dbf6efdeefdc%24/20190430074414/0.cbrevision:get_obj:http
>  status=206
> 2019-05-03 15:37:28.959 7f4a68484700  1 == req done req=0x55f2fde20970 op 
> status=-104 http_status=206 ==
>
>
> -Mensaje original-
> De: EDH - Manuel Rios Fernandez 
> Enviado el: viernes, 3 de mayo de 2019 15:12
> Para: 'Matt Benjamin' 
> CC: 'ceph-users' 
> Asunto: RE: [ceph-users] RGW Bucket unable to list buckets 100TB bucket
>
> Hi Matt,
>
> Thanks for your help,
>
> We have done the changes plus a reboot of the MONs and RGWs; they seemed to be 
> strangely stuck. Now we're able to list 250 directories.
>
> time s3cmd ls s3://datos101 --no-ssl --limit 150
> real2m50.854s
> user0m0.147s
> sys 0m0.042s
>
>
> Is there any recommendation for the maximum number of shards?
>
> Our main goal is cold storage; our usage is normally backups or customers with 
> tons of files. This causes customers to store millions of objects in a single 
> bucket.
>
> It's strange because this issue started on Friday without any warning or error in the 
> OSD / RGW logs.
>
> At what point should we warn customers that they will not be able to list their 
> directory if they reach X million objects?
>
> Our current ceph.conf
>
> #Normal-Memory 1/5
> debug rgw = 2
> #Disable
> debug osd = 0
> debug journal = 0
> debug ms = 0
>
> fsid = e1ee8086-7cce-43fd-a252-3d677af22428
> mon_initial_members = CEPH001, CEPH002, CEPH003
> mon_host = 172.16.2.10,172.16.2.11,172.16.2.12
> auth_cluster_required = cephx
> auth_service_required = cephx
> auth_client_required = cephx
> osd pool default pg num = 128
> osd pool default pgp num = 128
>
> public network = 172.16.2.0/24
> cluster network = 172.16.1.0/24
>
> osd pool default size = 2
> osd pool default min size = 1
>
> rgw_dynamic_resharding = true
> #Increment to 128
> rgw_override_bucket_index_max_shards = 128
>
> #Default: 1000
> rgw list buckets max chunk = 5000
>
>
>
> [osd]
> osd mkfs type = xfs
> osd op threads = 12
> osd disk threads = 12
>
> osd recovery threads = 4
> osd recovery op priority = 1
> osd recovery max active = 2
> osd recovery max single start = 1
>
> osd max backfills = 4
> osd backfill scan max = 16
> osd backfill scan min = 4
> osd client op priority = 63
>
>
> osd_memory_target = 2147483648
>
> osd_scrub_begin_hour = 23
> osd_scrub_end_hour = 6
> osd_scrub_load_threshold = 0.25 #low load scrubbing
> osd_scrub_during_recovery = false #scrub during recovery
>
> [mon]
> mon allow pool delete = true
> mon osd min down reporters = 3
>
> [mon.a]
> host = CEPH001
> public bind addr = 172.16.2.10
> mon addr = 172.16.2.10:6789
> mon allow pool delete = true
>
> [mon.b]
> host = CEPH002
> public bind addr = 172.16.2.11
> mon addr = 172.16.2.11:6789
> mon allow pool delete = true
>
> [mon.c]
> host = CEPH003
> public bind addr = 172.16.2.12
> mon addr = 172.16.2.12:6789
> mon allow pool delete = true
>
> [client.rgw]
>  rgw enable usage log = true
>
>
> [client.rgw.ceph-rgw01]
>  host = ceph-rgw01
>  rgw enable usage log = true
>  rgw dns name =
>  rgw frontends = "beast port=7480"
>  rgw resolve cname = false
>  rgw thread pool size = 512
>  rgw num rados handles = 1
>  rgw op thread timeout = 600
>
>
> [client.rgw.ceph-rgw03]
>  host = ceph-rgw03
>  rgw enable usage log = true
>  rgw dns name =
>  rgw frontends = "beast port=7480"
>  rgw resolve cname = false
>  rgw thread pool size = 512
>  rgw num rados handles = 1
>  rgw op thread timeout = 600
>
>
> On Thursday the customer told us that listing was instant, and now their 
> programs stall until they time out.
>
> Best Regards
>
> Manuel
>
> -Mensaje original-
> De: Matt Benjamin 
> Enviado el: viernes, 3 de mayo de 2019 14:00
> Para: EDH - Manuel Rios Fernandez 
> CC: ceph-users 
> Asunto: Re: [ceph-users] RGW Bucket unable to list buckets 100TB bucket
>
> Hi Folks,
>
> Thanks for sharing your ceph.conf along with the behavior.
>
> There are some odd things there.
>
> 1. rgw_num_rados_handles is deprecated--it should be 1 (the default), but 
> changing it may require you to check and retune

Re: [ceph-users] memory usage of: radosgw-admin bucket rm

2019-07-11 Thread Matt Benjamin
I don't think one has been created yet.  Eric Ivancich and Mark Kogan
of my team are investigating this behavior.

Matt

On Thu, Jul 11, 2019 at 10:40 AM Paul Emmerich  wrote:
>
> Is there already a tracker issue?
>
> I'm seeing the same problem here. Started deletion of a bucket with a few 
> hundred million objects a week ago or so and I've now noticed that it's also 
> leaking memory and probably going to crash.
> Going to investigate this further...
>
> Paul
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>
>
> On Tue, Jul 9, 2019 at 1:26 PM Matt Benjamin  wrote:
>>
>> Hi Harald,
>>
>> Please file a tracker issue, yes.  (Deletes do tend to be slower,
>> presumably due to rocksdb compaction.)
>>
>> Matt
>>
>> On Tue, Jul 9, 2019 at 7:12 AM Harald Staub  wrote:
>> >
>> > Currently removing a bucket with a lot of objects:
>> > radosgw-admin bucket rm --bucket=$BUCKET --bypass-gc --purge-objects
>> >
>> > This process was killed by the out-of-memory killer. Then looking at the
>> > graphs, we see a continuous increase of memory usage for this process,
>> > about +24 GB per day. Removal rate is about 3 M objects per day.
>> >
>> > It is not the fastest hardware, and this index pool is still without
>> > SSDs. The bucket is sharded, 1024 shards. We are on Nautilus 14.2.1, now
>> > about 500 OSDs.
>> >
>> > So with this bucket with 60 M objects, we would need about 480 GB of RAM
>> > to come through. Or is there a workaround? Should I open a tracker issue?
>> >
>> > The killed remove command can just be called again, but it will be
>> > killed again before it finishes. Also, it has to run some time until it
>> > continues to actually remove objects. This "wait time" is also
>> > increasing. Last time, after about 16 M objects already removed, the
>> > wait time was nearly 9 hours. Also during this time, there is a memory
>> > ramp, but not so steep.
>> >
>> > BTW it feels strange that the removal of objects is slower (about 3
>> > times) than adding objects.
>> >
>> >   Harry
>> > ___
>> > ceph-users mailing list
>> > ceph-users@lists.ceph.com
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> >
>> >
>>
>>
>> --
>>
>> Matt Benjamin
>> Red Hat, Inc.
>> 315 West Huron Street, Suite 140A
>> Ann Arbor, Michigan 48103
>>
>> http://www.redhat.com/en/technologies/storage
>>
>> tel.  734-821-5101
>> fax.  734-769-8938
>> cel.  734-216-5309
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] memory usage of: radosgw-admin bucket rm

2019-07-09 Thread Matt Benjamin
Hi Harald,

Please file a tracker issue, yes.  (Deletes do tend to be slower,
presumably due to rocksdb compaction.)
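
If compaction turns out to be the bottleneck, manually compacting the OSDs
that back the index pool during a quiet period sometimes helps; a rough
sketch (pool name is the default, adjust to yours):

ceph pg ls-by-pool default.rgw.buckets.index     # note the acting OSDs
ceph tell osd.<id> compact                       # repeat per OSD, off-peak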

Matt

On Tue, Jul 9, 2019 at 7:12 AM Harald Staub  wrote:
>
> Currently removing a bucket with a lot of objects:
> radosgw-admin bucket rm --bucket=$BUCKET --bypass-gc --purge-objects
>
> This process was killed by the out-of-memory killer. Then looking at the
> graphs, we see a continuous increase of memory usage for this process,
> about +24 GB per day. Removal rate is about 3 M objects per day.
>
> It is not the fastest hardware, and this index pool is still without
> SSDs. The bucket is sharded, 1024 shards. We are on Nautilus 14.2.1, now
> about 500 OSDs.
>
> So with this bucket with 60 M objects, we would need about 480 GB of RAM
> to come through. Or is there a workaround? Should I open a tracker issue?
>
> The killed remove command can just be called again, but it will be
> killed again before it finishes. Also, it has to run some time until it
> continues to actually remove objects. This "wait time" is also
> increasing. Last time, after about 16 M objects already removed, the
> wait time was nearly 9 hours. Also during this time, there is a memory
> ramp, but not so steep.
>
> BTW it feels strange that the removal of objects is slower (about 3
> times) than adding objects.
>
>   Harry
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>


-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RADOSGW S3 - Continuation Token Ignored?

2019-06-28 Thread Matt Benjamin
Hi Dominic,

The reason is likely that RGW doesn't yet support ListObjectsV2.

Support is nearly here though:  https://github.com/ceph/ceph/pull/28102
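
Until that lands in your release, the V1 call with a marker paginates
correctly; a rough equivalent of your loop with the aws CLI and jq
(endpoint is a placeholder) would be:

marker=""
while :; do
    out=$(aws s3api list-objects --endpoint-url http://rgw.example.com \
          --bucket WorkOrder --max-keys 1000 --no-paginate \
          ${marker:+--marker "$marker"})
    echo "$out" | jq -r '.Contents[]?.Key'
    [ "$(echo "$out" | jq -r '.IsTruncated')" = "true" ] || break
    marker=$(echo "$out" | jq -r '.Contents[-1].Key')
done

In the Java SDK the analogous V1 calls are listObjects() with a
ListObjectsRequest, using withMarker() and the last returned key instead of
the continuation token.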

Matt


On Fri, Jun 28, 2019 at 12:43 PM  wrote:
>
> All;
>
> I've got a RADOSGW instance setup, backed by my demonstration Ceph cluster.  
> I'm using Amazon's S3 SDK, and I've run into an annoying little snag.
>
> My code looks like this:
> amazonS3 = builder.build();
>
> ListObjectsV2Request req = new 
> ListObjectsV2Request().withBucketName("WorkOrder").withMaxKeys(MAX_KEYS);
> ListObjectsV2Result result;
>
> do
> {
> result = amazonS3.listObjectsV2(req);
>
> for (S3ObjectSummary objectSummary : result.getObjectSummaries())
> {
> summaries.add(objectSummary);
> }
>
> String token = result.getNextContinuationToken();
> req.setContinuationToken(token);
> }
> while (result.isTruncated());
>
> The problem is, the ContinuationToken seems to be ignored, i.e. every call to 
> amazonS3.listObjectsV2(req) returns the same set, and the loop never ends 
> (until the summaries LinkedList overflows).
>
> Thoughts?
>
> Thank you,
>
> Dominic L. Hilsbos, MBA
> Director - Information Technology
> Perform Air International Inc.
> dhils...@performair.com
> www.PerformAir.com
>
>
>
> _______
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>


--

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RadosGW ops log lag?

2019-04-17 Thread Matt Benjamin
It should not be best effort.  As written, exactly
rgw_usage_log_flush_threshold outstanding log entries will be
buffered.  The default value for this parameter is 1024, which is
probably not high for a sustained workload, but you could experiment
with reducing it.
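
If you want to experiment without a restart, the current value can be read
and, I believe, changed through the admin socket (daemon name below is a
placeholder):

ceph daemon /var/run/ceph/ceph-client.rgw.gateway1.asok config get rgw_usage_log_flush_threshold
ceph daemon /var/run/ceph/ceph-client.rgw.gateway1.asok config set rgw_usage_log_flush_threshold 128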

Matt

On Fri, Apr 12, 2019 at 11:21 AM Aaron Bassett
 wrote:
>
> Ok thanks. Is the expectation that events will be available on that socket as 
> soon as the occur or is it more of a best effort situation? I'm just trying 
> to nail down which side of the socket might be lagging. It's pretty difficult 
> to recreate this as I have to hit the cluster very hard to get it to start 
> lagging.
>
> Thanks, Aaron
>
> > On Apr 12, 2019, at 11:16 AM, Matt Benjamin  wrote:
> >
> > Hi Aaron,
> >
> > I don't think that exists currently.
> >
> > Matt
> >
> > On Fri, Apr 12, 2019 at 11:12 AM Aaron Bassett
> >  wrote:
> >>
> >> I have an radogw log centralizer that we use to for an audit trail for 
> >> data access in our ceph clusters. We've enabled the ops log socket and 
> >> added logging of the http_authorization header to it:
> >>
> >> rgw log http headers = "http_authorization"
> >> rgw ops log socket path = /var/run/ceph/rgw-ops.sock
> >> rgw enable ops log = true
> >>
> >> We have a daemon that listens on the ops socket, extracts/manipulates some 
> >> information from the ops log, and sends it off to our log aggregator.
> >>
> >> This setup works pretty well for the most part, except when the cluster 
> >> comes under heavy load, it can get _very_ laggy - sometimes up to several 
> >> hours behind. I'm having a hard time nailing down whats causing this lag. 
> >> The daemon is rather naive, basically just some nc with jq in between, but 
> >> the log aggregator has plenty of spare capacity, so I don't think its 
> >> slowing down how fast the daemon is consuming from the socket.
> >>
> >> I was revisiting the documentation about this ops log and noticed the 
> >> following which I hadn't seen previously:
> >>
> >> When specifying a UNIX domain socket, it is also possible to specify the 
> >> maximum amount of memory that will be used to keep the data backlog:
> >> rgw ops log data backlog = 
> >> Any backlogged data in excess to the specified size will be lost, so the 
> >> socket needs to be read constantly.
> >>
> >> I'm wondering if theres a way I can query radosgw for the current size of 
> >> that backlog to help me narrow down where the bottleneck may be occuring.
> >>
> >> Thanks,
> >> Aaron
> >>
> >>
> >>
> >> CONFIDENTIALITY NOTICE
> >> This e-mail message and any attachments are only for the use of the 
> >> intended recipient and may contain information that is privileged, 
> >> confidential or exempt from disclosure under applicable law. If you are 
> >> not the intended recipient, any disclosure, distribution or other use of 
> >> this e-mail message or attachments is prohibited. If you have received 
> >> this e-mail message in error, please delete and notify the sender 
> >> immediately. Thank you.
> >>
> >> ___
> >> ceph-users mailing list
> >> ceph-users@lists.ceph.com
> >> https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.ceph.com_listinfo.cgi_ceph-2Dusers-2Dceph.com=DwIFaQ=Tpa2GKmmYSmpYS4baANxQwQYqA0vwGXwkJOPBegaiTs=5nKer5huNDFQXjYpOR4o_7t5CRI8wb5Vb_v1pBywbYw=sIK_aBR3PrR2olfXOZWgvPVm7jIoZtvEk2YHofl4TDU=FzFoCJ8qtZ66OKdL1Ph10qjZbCEjvMg9JyS_9LwEpSg=
> >>
> >>
> >
> >
> > --
> >
> > Matt Benjamin
> > Red Hat, Inc.
> > 315 West Huron Street, Suite 140A
> > Ann Arbor, Michigan 48103
> >
> > https://urldefense.proofpoint.com/v2/url?u=http-3A__www.redhat.com_en_technologies_storage=DwIFaQ=Tpa2GKmmYSmpYS4baANxQwQYqA0vwGXwkJOPBegaiTs=5nKer5huNDFQXjYpOR4o_7t5CRI8wb5Vb_v1pBywbYw=sIK_aBR3PrR2olfXOZWgvPVm7jIoZtvEk2YHofl4TDU=hi6_HiZS0D_nzAqKsvJPPfmi8nZSv4lZCRFZ1ru9CxM=
> >
> > tel.  734-821-5101
> > fax.  734-769-8938
> > cel.  734-216-5309
>
>


-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RADOSGW S3 - Continuation Token Ignored?

2019-06-28 Thread Matt Benjamin
FYI, this PR just merged.  I would expect to see backports at least as
far as N, and others would be possible.

regards,

Matt

On Fri, Jun 28, 2019 at 3:43 PM  wrote:
>
> Matt;
>
> Yep, that would certainly explain it.
>
> My apologies, I almost searched for that information before sending the email.
>
> Thank you,
>
> Dominic L. Hilsbos, MBA
> Director – Information Technology
> Perform Air International Inc.
> dhils...@performair.com
> www.PerformAir.com
>
>
>
> -Original Message-
> From: Matt Benjamin [mailto:mbenj...@redhat.com]
> Sent: Friday, June 28, 2019 9:48 AM
> To: Dominic Hilsbos
> Cc: ceph-users
> Subject: Re: [ceph-users] RADOSGW S3 - Continuation Token Ignored?
>
> Hi Dominic,
>
> The reason is likely that RGW doesn't yet support ListObjectsV2.
>
> Support is nearly here though:  https://github.com/ceph/ceph/pull/28102
>
> Matt
>
>
> On Fri, Jun 28, 2019 at 12:43 PM  wrote:
> >
> > All;
> >
> > I've got a RADOSGW instance setup, backed by my demonstration Ceph cluster. 
> >  I'm using Amazon's S3 SDK, and I've run into an annoying little snag.
> >
> > My code looks like this:
> > amazonS3 = builder.build();
> >
> > ListObjectsV2Request req = new 
> > ListObjectsV2Request().withBucketName("WorkOrder").withMaxKeys(MAX_KEYS);
> > ListObjectsV2Result result;
> >
> > do
> > {
> > result = amazonS3.listObjectsV2(req);
> >
> > for (S3ObjectSummary objectSummary : result.getObjectSummaries())
> > {
> > summaries.add(objectSummary);
> > }
> >
> > String token = result.getNextContinuationToken();
> > req.setContinuationToken(token);
> > }
> > while (result.isTruncated());
> >
> > The problem is, the ContinuationToken seems to be ignored, i.e. every call 
> > to amazonS3.listObjectsV2(req) returns the same set, and the loop never 
> > ends (until the summaries LinkedList overflows).
> >
> > Thoughts?
> >
> > Thank you,
> >
> > Dominic L. Hilsbos, MBA
> > Director - Information Technology
> > Perform Air International Inc.
> > dhils...@performair.com
> > www.PerformAir.com
> >
> >
> >
> > _______
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> >
>
>
> --
>
> Matt Benjamin
> Red Hat, Inc.
> 315 West Huron Street, Suite 140A
> Ann Arbor, Michigan 48103
>
> http://www.redhat.com/en/technologies/storage
>
> tel.  734-821-5101
> fax.  734-769-8938
> cel.  734-216-5309



-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] NFS

2019-10-03 Thread Matt Benjamin
RGW NFS can support any NFS style of authentication, but users will
have the RGW access of their nfs-ganesha export.  You can create
exports with disjoint privileges and, since recent Luminous and Nautilus
releases, RGW tenants.

Matt

On Tue, Oct 1, 2019 at 8:31 AM Marc Roos  wrote:
>
>  I think you can run into problems
> with a multi-user environment of RGW and nfs-ganesha.
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] NFS

2019-10-03 Thread Matt Benjamin
Hi Mark,

Here's an example that should work--userx and usery are RGW users
created in different tenants, like so:

radosgw-admin --tenant tnt1 --uid userx --display-name "tnt1-userx" \
 --access_key "userxacc" --secret "test123" user create

 radosgw-admin --tenant tnt2 --uid usery --display-name "tnt2-usery" \
 --access_key "useryacc" --secret "test456" user create

Remember that to make use of this feature, you need recent librgw and
matching nfs-ganesha.  In particular, Ceph should have, among other
changes:

commit 65d0ae733defe277f31825364ee52d5102c06ab9
Author: Matt Benjamin 
Date:   Wed Jun 5 07:25:35 2019 -0400

rgw_file: include tenant in hashes of object

Because bucket names are taken as object names in the top
of an export.  Make hashing by tenant general to avoid disjoint
hashing of bucket.

Fixes: http://tracker.ceph.com/issues/40118

Signed-off-by: Matt Benjamin 
(cherry picked from commit 8e0fd5fbfa7c770f6b668e79b772179946027bce)

commit 459b6b2b224953655fd0360e8098ae598e41d3b2
Author: Matt Benjamin 
Date:   Wed May 15 15:53:32 2019 -0400

rgw_file: include tenant when hashing bucket names

Prevent identical paths from distinct tenants from colliding in
RGW NFS handle cache.

Fixes: http://tracker.ceph.com/issues/40118

Signed-off-by: Matt Benjamin 
(cherry picked from commit b800a9de83dff23a150ed7d236cb61c8b7d971ae)
Signed-off-by: Matt Benjamin 


ganesha.conf.deuxtenant:


EXPORT
{
# Export Id (mandatory, each EXPORT must have a unique Export_Id)
Export_Id = 77;

# Exported path (mandatory)
Path = "/";

# Pseudo Path (required for NFS v4)
Pseudo = "/userx";

# Required for access (default is None)
# Could use CLIENT blocks instead
Access_Type = RW;

SecType = "sys";

Protocols = 3,4;
Transports = UDP,TCP;

#Delegations = Readwrite;

Squash = No_Root_Squash;

# Exporting FSAL
FSAL {
Name = RGW;
User_Id = "userx";
Access_Key_Id = "userxacc";
Secret_Access_Key = "test123";
}
}

EXPORT
{
# Export Id (mandatory, each EXPORT must have a unique Export_Id)
Export_Id = 78;

# Exported path (mandatory)
Path = "/";

# Pseudo Path (required for NFS v4)
Pseudo = "/usery";

# Required for access (default is None)
# Could use CLIENT blocks instead
Access_Type = RW;

SecType = "sys";

Protocols = 3,4;
Transports = UDP,TCP;

#Delegations = Readwrite;

Squash = No_Root_Squash;

# Exporting FSAL
FSAL {
Name = RGW;
User_Id = "usery";
Access_Key_Id = "useryacc";
Secret_Access_Key = "test456";
}
}

#mount at bucket case
EXPORT
{
# Export Id (mandatory, each EXPORT must have a unique Export_Id)
Export_Id = 79;

# Exported path (mandatory)
Path = "/buck5";

# Pseudo Path (required for NFS v4)
Pseudo = "/usery_buck5";

# Required for access (default is None)
# Could use CLIENT blocks instead
Access_Type = RW;

SecType = "sys";

Protocols = 3,4;
Transports = UDP,TCP;

#Delegations = Readwrite;

Squash = No_Root_Squash;

# Exporting FSAL
FSAL {
Name = RGW;
User_Id = "usery";
Access_Key_Id = "useryacc";
Secret_Access_Key = "test456";
}
}



RGW {
ceph_conf = "/home/mbenjamin/ceph-noob/build/ceph.conf";
#init_args = "-d --debug-rgw=16";
init_args = "";
}

NFS_Core_Param {
Nb_Worker = 17;
mount_path_pseudo = true;
}

CacheInode {
Chunks_HWMark = 7;
Entries_Hwmark = 200;
}

NFSV4 {
Graceless = true;
Allow_Numeric_Owners = true;
Only_Numeric_Owners = true;
}

LOG {
Components {
#NFS_READDIR = FULL_DEBUG;
#NFS4 = FULL_DEBUG;
#CACHE_INODE = FULL_DEBUG;
#FSAL = FULL_DEBUG;
}
Facility {
name = FILE;
destination = "/tmp/ganesha-rgw.log";
enable = active;
}
}
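
With mount_path_pseudo enabled as above, a client mount for one of the
tenants then looks something like this (hostname is a placeholder):

mount -t nfs4 -o proto=tcp ganesha-host:/userx /mnt/userx
ls /mnt/userx     # buckets of tnt1$userx show up as top-level directories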

On Thu, Oct 3, 2019 at 10:34 AM Marc Roos  wrote:
>
>
> How should a multi tenant RGW config look like, I am not able get this
> working:
>
> EXPORT {
>Export_ID=301;
>Path = "test:test3";
>#Path = "/";
>Pseudo = "/rgwtester";
>
>Protocols = 4;
>FSAL {
>Name = RGW;
>User_Id = "test$tester1";
>Access_Key_Id = "TESTER";
>Secret_Access_Key = "xxx";
>}
>Disable_ACL = TRUE;
>CLIENT { Clients = 192.168.10.0/24; access_type = "RO"; }
> }
>
>
>