Thanks to everyone for the helpful feedback. I appreciate the responsiveness.

-Jon


-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Matt Benjamin
Sent: Wednesday, June 28, 2017 4:20 PM
To: Gregory Farnum <gfar...@redhat.com>
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Mapping data and metadata between rados and cephfs

Hi,

That's true, sure.  We hope to support async mounts and more normal workflows
in the future, but those are important caveats.  Editing objects in place
doesn't work with RGW NFS.
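For reference, exporting the S3 namespace this way is configured on the nfs-ganesha side with the RGW FSAL. A minimal sketch follows; the user ID, keys, and RGW instance name are placeholders to substitute with your own:

```
# Minimal nfs-ganesha export via the RGW FSAL (sketch; IDs and keys are placeholders)
EXPORT {
    Export_ID = 1;
    Path = "/";
    Pseudo = "/";
    Access_Type = RW;
    Protocols = 4;
    Transports = TCP;

    FSAL {
        Name = RGW;
        User_Id = "s3user";
        Access_Key_Id = "ACCESS_KEY";
        Secret_Access_Key = "SECRET_KEY";
    }
}

RGW {
    ceph_conf = "/etc/ceph/ceph.conf";
    name = "client.rgw.gateway";
}
```

With this in place, an NFSv4 client mounting the export sees buckets as top-level directories and objects as files beneath them, subject to the in-place-edit caveat above.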

Matt

----- Original Message -----
> From: "Gregory Farnum" <gfar...@redhat.com>
> To: "Matt Benjamin" <mbenja...@redhat.com>, "David Turner" <drakonst...@gmail.com>
> Cc: ceph-users@lists.ceph.com
> Sent: Wednesday, June 28, 2017 4:14:39 PM
> Subject: Re: [ceph-users] Mapping data and metadata between rados and cephfs
> 
> On Wed, Jun 28, 2017 at 2:10 PM Matt Benjamin <mbenja...@redhat.com> wrote:
> 
> > Hi,
> >
> > A supported way to access S3 objects from a filesystem mount is with
> > RGW NFS.  That is, RGW now exports the S3 namespace directly as files
> > and directories; one consumer is an nfs-ganesha NFS driver.
> >
> 
> This supports a very specific subset of use cases/fs operations 
> though, right? You can use it if you're just doing bulk file shuffling 
> but it's not a way to upload via S3 and then perform filesystem 
> update-in-place operations in any reasonable fashion (which is what I 
> think was described in the original query).
> -Greg
> 
> 
> >
> > Regards,
> >
> > Matt
> >
> > ----- Original Message -----
> > > From: "David Turner" <drakonst...@gmail.com>
> > > To: "Jonathan Lefman" <jonathan.lef...@intel.com>, ceph-users@lists.ceph.com
> > > Sent: Wednesday, June 28, 2017 2:59:12 PM
> > > Subject: Re: [ceph-users] Mapping data and metadata between rados and cephfs
> > >
> > > CephFS is very different from RGW. You may be able to utilize s3fs-fuse
> > > to interface with RGW, but I haven't heard of anyone using that on the
> > > ML before.
> > >
> > > On Wed, Jun 28, 2017 at 2:57 PM Lefman, Jonathan <jonathan.lef...@intel.com> wrote:
> > >
> > > Thanks for the prompt reply. I was hoping that there would be an s3fs
> > > ( https://github.com/s3fs-fuse/s3fs-fuse ) equivalent for Ceph since
> > > there are numerous functional similarities. Ideally one would be able to
> > > upload data to a bucket and have the file synced to the local filesystem
> > > mount of that bucket. This is similar to the idea of uploading data
> > > through RadosGW and having the data be available in CephFS.
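Since RGW speaks the S3 API, s3fs-fuse can in principle mount an RGW bucket directly. A sketch, assuming a hypothetical RGW endpoint and bucket name (substitute your own values; RGW typically needs path-style requests):

```shell
# Store the S3 credentials s3fs expects (placeholder keys).
echo 'ACCESS_KEY:SECRET_KEY' > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs

# Mount the bucket from the RGW endpoint; use_path_request_style
# makes s3fs use path-style URLs, which RGW usually requires.
s3fs mybucket /mnt/mybucket \
    -o passwd_file=~/.passwd-s3fs \
    -o url=http://rgw.example.com:7480 \
    -o use_path_request_style
```

Note this gives an S3-backed FUSE mount, not a CephFS mount; the caveats about in-place edits discussed elsewhere in this thread still apply.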
> > >
> > > -Jon
> > >
> > > From: David Turner [mailto:drakonst...@gmail.com]
> > > Sent: Wednesday, June 28, 2017 2:51 PM
> > > To: Lefman, Jonathan <jonathan.lef...@intel.com>; ceph-users@lists.ceph.com
> > > Subject: Re: [ceph-users] Mapping data and metadata between rados and cephfs
> > >
> > > CephFS and RGW store data differently. I have never heard of CephFS and
> > > RGW sharing the same data pool, nor do I believe it's possible.
> > >
> > > On Wed, Jun 28, 2017 at 2:48 PM Lefman, Jonathan <jonathan.lef...@intel.com> wrote:
> > >
> > > Yes, sorry. I meant the RadosGW. I still do not know what the mechanism
> > > is to enable the mapping between data inserted by the rados component
> > > and the cephfs component. I hope that makes sense.
> > >
> > > -Jon
> > >
> > > From: David Turner [mailto:drakonst...@gmail.com]
> > > Sent: Wednesday, June 28, 2017 2:46 PM
> > > To: Lefman, Jonathan <jonathan.lef...@intel.com>; ceph-users@lists.ceph.com
> > > Subject: Re: [ceph-users] Mapping data and metadata between rados and cephfs
> > >
> > > You want to access the same data via a rados API and via cephfs? Are
> > > you thinking RadosGW?
> > >
> > > On Wed, Jun 28, 2017 at 1:54 PM Lefman, Jonathan <jonathan.lef...@intel.com> wrote:
> > >
> > > Hi all,
> > >
> > > I would like to create a 1-to-1 mapping between rados and cephfs.
> > > Here's the usage scenario:
> > >
> > > 1. Upload a file via REST through the rados-compatible (S3) API
> > >
> > > 2. Run "local" operations on the delivered file via the linked cephfs
> > > mount
> > >
> > > 3. Retrieve/download the newly created data, available on the cephfs
> > > mount, via the rados API
> > >
> > > I would like to know whether this is possible out-of-the-box, whether
> > > it will never work, or whether it may work with a bit of effort. If it
> > > is possible, can it be achieved in a scalable manner to accommodate
> > > multiple (10s to 100s) users on the same system?
> > >
> > > I asked this question in #ceph and #ceph-devel. So far, there have not
> > > been replies with a way to accomplish this. Thank you.
> > >
> > > -Jon
> > >
> > > _______________________________________________
> > > ceph-users mailing list
> > > ceph-users@lists.ceph.com
> > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> > >
> >
> > --
> > Matt Benjamin
> > Red Hat, Inc.
> > 315 West Huron Street, Suite 140A
> > Ann Arbor, Michigan 48103
> >
> > http://www.redhat.com/en/technologies/storage
> >
> > tel.  734-821-5101
> > fax.  734-769-8938
> > cel.  734-216-5309
> >
> 

--
Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
