Yes, technically the pool could be used for multiple things, but that is
not a common practice and I don't think it is a scenario tested by the dev
team with each release. I would avoid it unless you have a very compelling
argument for doing something so differently.
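
A minimal sketch of that separation, using example pool names and pg counts
(the "application enable" step is optional and assumes Luminous or newer):

$ # pools used exclusively by CephFS
$ ceph osd pool create hdb-backup 128
$ ceph osd pool create hdb-backup_metadata 128
$ ceph fs new hdb-backup-fs hdb-backup_metadata hdb-backup

$ # a separate pool for objects written directly with librados/rados
$ ceph osd pool create librados-data 128
$ ceph osd pool application enable librados-data rados   # Luminous+
$ rados -p librados-data put example-object /tmp/example.dat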

On Tue, Jul 25, 2017, 6:13 AM <[email protected]> wrote:

> Understood.
>
> Would you recommend to have a dedicated pool for the data that is directly
> written using librados and another pool for the filesystem (CephFS)?
>
>
>
>
> On 24 July 2017 at 19:46, "David Turner" <[email protected]> wrote:
>
> You might be able to read these objects using s3fs if you're using a
> RadosGW. But like John mentioned, you cannot write them as objects into the
> pool and read them as files from the filesystem.
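>
> A rough, illustrative sketch of that approach (bucket name, endpoint and
> credentials file below are assumptions, and it only applies to objects
> that actually live in RGW buckets):
>
> $ # mount an RGW bucket as a filesystem with s3fs-fuse
> $ s3fs mybucket /mnt/s3 -o passwd_file=/etc/passwd-s3fs \
>     -o url=http://rgw-host:7480 -o use_path_request_style
>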
> On Mon, Jul 24, 2017, 12:07 PM John Spray <[email protected]> wrote:
>
> On Mon, Jul 24, 2017 at 4:52 PM, <[email protected]> wrote:
> > Hello!
> >
> > I created CephFS according to documentation:
> > $ ceph osd pool create hdb-backup <pg_num>
> > $ ceph osd pool create hdb-backup_metadata <pg_num>
> > $ ceph fs new <fs_name> <metadata> <data>
> >
> > I can mount this pool with user admin:
> > ld4257:/etc/ceph # mount -t ceph 10.96.5.37,10.96.5.38,10.96.5.38:/
> /mnt/cephfs -o name=admin,secretfile=/etc/ceph/ceph.client.admin.key
>
> Need to untangle the terminology a bit.
>
> What you're mounting is a filesystem, and the filesystem stores its
> data in pools. Pools are a lower-level concept than filesystems.
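>
> For example, "ceph fs ls" shows which pools a filesystem uses (the output
> below is illustrative, not taken from this cluster):
>
> $ ceph fs ls
> name: hdb-backup-fs, metadata pool: hdb-backup_metadata, data pools: [hdb-backup ]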
>
> > ld4257:/etc/ceph # mount | grep ceph
> > 10.96.5.37,10.96.5.38,10.96.5.38:/ on /mnt/cephfs type ceph
> (rw,relatime,name=admin,secret=<hidden>,acl)
> >
> > To verify which pool is mounted, I checked this:
> > ld4257:/etc/ceph # ceph osd lspools
> > 0 rbd,1 templates,3 hdb-backup,4 hdb-backup_metadata,
> >
> > ld4257:/etc/ceph # cephfs /mnt/cephfs/ show_layout
> > WARNING: This tool is deprecated. Use the layout.* xattrs to query and
> modify layouts.
> > layout.data_pool: 3
> > layout.object_size: 4194304
> > layout.stripe_unit: 4194304
> > layout.stripe_count: 1
> >
> > So, I guess the correct pool "hdb-backup" is now mounted to /mnt/cephfs.
> >
> > Then I pushed some files in this pool.
>
> I think you mean that you put some objects into your pool. At this
> stage you have not created any files, so CephFS doesn't know anything
> about these objects. You would need to actually create files (i.e.
> write through your mount) for files to exist in CephFS.
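>
> As a sketch of the difference (the paths and object name below are
> examples; CephFS names its data objects after the file's inode number in
> hex, so the exact names will differ):
>
> $ # create a real file through the CephFS mount
> $ cp /tmp/report.txt /mnt/cephfs/MTY/report.txt
> $ # its backing objects then show up in the data pool
> $ rados -p hdb-backup ls | grep -v '^MTY:file:'
> 10000000001.00000000
> $ # the file's layout (data pool, striping) can be read via xattrs
> $ getfattr -n ceph.file.layout /mnt/cephfs/MTY/report.txt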
>
> > I can display the relevant objects now:
> > ld4257:/etc/ceph # rados -p hdb-backup ls
> > MTY:file:8669fdbb88fda698afbac6374d826cba133a8d11:7269
> > MTY:file:8669fdbb88fda698afbac6374d826cba133a8d11:6357
> > MTY:file:8669fdbb88fda698afbac6374d826cba133a8d11:772
> > MTY:file:8669fdbb88fda698afbac6374d826cba133a8d11:14039
> > MTY:file:8669fdbb88fda698afbac6374d826cba133a8d11:1803
> > MTY:file:8669fdbb88fda698afbac6374d826cba133a8d11:5549
> > MTY:file:8669fdbb88fda698afbac6374d826cba133a8d11:15797
> > MTY:file:8669fdbb88fda698afbac6374d826cba133a8d11:20624
> > MTY:file:8669fdbb88fda698afbac6374d826cba133a8d11:7322
> > MTY:file:8669fdbb88fda698afbac6374d826cba133a8d11:5208
> > MTY:file:8669fdbb88fda698afbac6374d826cba133a8d11:17479
> > MTY:file:8669fdbb88fda698afbac6374d826cba133a8d11:14361
> > MTY:file:8669fdbb88fda698afbac6374d826cba133a8d11:16963
> > MTY:file:8669fdbb88fda698afbac6374d826cba133a8d11:4694
> > MTY:file:8669fdbb88fda698afbac6374d826cba133a8d11:1391
> > MTY:file:8669fdbb88fda698afbac6374d826cba133a8d11:1199
> > MTY:file:8669fdbb88fda698afbac6374d826cba133a8d11:11359
> > MTY:file:8669fdbb88fda698afbac6374d826cba133a8d11:11995
> > [...]
> >
> > (This is just an extract; there are many more objects.)
> >
> > Now, the question is:
> > Can I display these files with CephFS?
>
> Unfortunately not -- you would need to write your data in as files
> (via a cephfs mount) to read it back as files.
>
> John
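>
> One hedged sketch of such a migration, pulling an existing object out of
> the pool with rados and rewriting it as a file through the mount (the
> object and file names are examples taken from the listing above):
>
> $ rados -p hdb-backup get MTY:file:8669fdbb88fda698afbac6374d826cba133a8d11:7269 /tmp/obj-7269
> $ cp /tmp/obj-7269 /mnt/cephfs/MTY/7269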
>
> >
> > When I check the content of /mnt/cephfs, there's only one directory
> "MTY" that I have created; this directory is not related to the output of
> rados at all:
> > ld4257:/etc/ceph # ll /mnt/cephfs/
> > total 0
> > drwxr-xr-x 1 root root 0 Jul 24 15:57 MTY
> >
> > THX
