Re: [ceph-users] cephfs filesystem layouts : authentication gotchas ?

2015-03-04 Thread Gregory Farnum
Just to get more specific: the reason you can apparently write stuff
to a file when you can't write to the pool it's stored in is because
the file data is initially stored in cache. The flush out to RADOS,
when it happens, will fail.
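
A rough way to see this with the commands already used further down in this
thread (assuming nothing else writes to the pool in the meantime):

[root@dev7248 ~]# echo not allowed > /mnt/root/secret.notfailed   # appears to succeed: data only reaches the client page cache
[root@dev7248 ~]# sync                                            # asks the kernel to flush; the write-back to the 'root' pool fails
[root@ceph0 ~]# rados -p root ls                                  # no new objects for this file ever appear in the pool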

It would definitely be preferable if there were some way to immediately
return a permission or IO error in this case, but so far we haven't
found one; the relevant interfaces just aren't present, and it's
unclear how to propagate the error back to users in a way that makes
sense even if they were. :/
-Greg

On Wed, Mar 4, 2015 at 3:37 AM, SCHAER Frederic frederic.sch...@cea.fr wrote:
 Hi,

 Many thanks for the explanations.
 I haven't used the nodcache option when mounting cephfs, it actually got 
 there by default

 My mount command is/was :
 # mount -t ceph 1.2.3.4:6789:/ /mnt -o name=puppet,secretfile=./puppet.secret

 I don't know what causes this option to be default, maybe it's the kernel 
 module I compiled from git (because there is no kmod-ceph or kmod-rbd in any 
 RHEL-like distributions except RHEV), I'll try to update/check ...

 Concerning the rados pool ls, indeed : I created empty files in the pool, and 
 they were not showing up probably because they were just empty - but when I 
 create a non empty file, I see things in rados ls...

 Thanks again
 Frederic


 -----Original Message-----
 From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On behalf of John Spray
 Sent: Tuesday, 3 March 2015 17:15
 To: ceph-users@lists.ceph.com
 Subject: Re: [ceph-users] cephfs filesystem layouts : authentication gotchas ?



 On 03/03/2015 15:21, SCHAER Frederic wrote:

 By the way : looks like the ceph fs ls command is inconsistent when
 the cephfs is mounted (I used a locally compiled kmod-ceph rpm):

 [root@ceph0 ~]# ceph fs ls

 name: cephfs_puppet, metadata pool: puppet_metadata, data pools: [puppet ]

 (umount /mnt ...)

 [root@ceph0 ~]# ceph fs ls

 name: cephfs_puppet, metadata pool: puppet_metadata, data pools:
 [puppet root ]

 This is probably #10288, which was fixed in 0.87.1

 So, I have this pool named root that I added in the cephfs filesystem.

 I then edited the filesystem xattrs :

 [root@ceph0 ~]# getfattr -n ceph.dir.layout /mnt/root

 getfattr: Removing leading '/' from absolute path names

 # file: mnt/root

 ceph.dir.layout=stripe_unit=4194304 stripe_count=1
 object_size=4194304 pool=root

 I'm therefore assuming client.puppet should not be allowed to write or
 read anything in /mnt/root, which belongs to the root pool... but that
 is not the case.

 On another machine where I mounted cephfs using the client.puppet key,
 I can do this :

 The mount was done with the client.puppet key, not the admin one that
 is not deployed on that node :

 1.2.3.4:6789:/ on /mnt type ceph
 (rw,relatime,name=puppet,secret=hidden,nodcache)

 [root@dev7248 ~]# echo not allowed > /mnt/root/secret.notfailed

 [root@dev7248 ~]#

 [root@dev7248 ~]# cat /mnt/root/secret.notfailed

 not allowed

 This is data you're seeing from the page cache, it hasn't been written
 to RADOS.

 You have used the nodcache setting, but that doesn't mean what you
 think it does (it was about caching dentries, not data).  It's actually
 not even used in recent kernels (http://tracker.ceph.com/issues/11009).

 You could try the nofsc option, but I don't know exactly how much
 caching that turns off -- the safer approach here is probably to do your
 testing using I/Os that have O_DIRECT set.

 And I can even see the xattrs inherited from the parent dir :

 [root@dev7248 ~]# getfattr -n ceph.file.layout /mnt/root/secret.notfailed

 getfattr: Removing leading '/' from absolute path names

 # file: mnt/root/secret.notfailed

 ceph.file.layout=stripe_unit=4194304 stripe_count=1
 object_size=4194304 pool=root

 Whereas on the node where I mounted cephfs as ceph admin, I get nothing :

 [root@ceph0 ~]# cat /mnt/root/secret.notfailed

 [root@ceph0 ~]# ls -l /mnt/root/secret.notfailed

 -rw-r--r-- 1 root root 12 Mar  3 15:27 /mnt/root/secret.notfailed

 After some time, the file also gets empty on the puppet client host :

 [root@dev7248 ~]# cat /mnt/root/secret.notfailed

 [root@dev7248 ~]#

 (but the metadata remained ?)

 Right -- eventually the cache goes away, and you see the true (empty)
 state of the file.

 Also, as an unprivileged user, I can get ownership of a secret file
 by changing the extended attribute :

 [root@dev7248 ~]# setfattr -n ceph.file.layout.pool -v puppet
 /mnt/root/secret.notfailed

 [root@dev7248 ~]# getfattr -n ceph.file.layout /mnt/root/secret.notfailed

 getfattr: Removing leading '/' from absolute path names

 # file: mnt/root/secret.notfailed

 ceph.file.layout=stripe_unit=4194304 stripe_count=1
 object_size=4194304 pool=puppet

 Well, you're not really getting ownership of anything here: you're
 modifying the file's metadata, which you are entitled to do (pool
 permissions have nothing to do with file metadata).  There was a recent
 bug where a file's pool layout could be changed even if it had data, but
 that was about safety rather than permissions.

Re: [ceph-users] cephfs filesystem layouts : authentication gotchas ?

2015-03-04 Thread SCHAER Frederic
Hi,

Many thanks for the explanations.
I haven't used the nodcache option when mounting cephfs, it actually got 
there by default 

My mount command is/was :
# mount -t ceph 1.2.3.4:6789:/ /mnt -o name=puppet,secretfile=./puppet.secret
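
(For a retest with the nofsc option John suggests below, the mount line would
presumably become something like this -- though as John notes, it is unclear
how much client-side caching that actually disables:)
# mount -t ceph 1.2.3.4:6789:/ /mnt -o name=puppet,secretfile=./puppet.secret,nofsc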

I don't know what causes this option to be default, maybe it's the kernel 
module I compiled from git (because there is no kmod-ceph or kmod-rbd in any 
RHEL-like distributions except RHEV), I'll try to update/check ...

Concerning the rados pool ls, indeed : I created empty files in the pool, and 
they were not showing up probably because they were just empty - but when I 
create a non empty file, I see things in rados ls...

Thanks again
Frederic


-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On behalf of John Spray
Sent: Tuesday, 3 March 2015 17:15
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] cephfs filesystem layouts : authentication gotchas ?



On 03/03/2015 15:21, SCHAER Frederic wrote:

 By the way : looks like the ceph fs ls command is inconsistent when 
 the cephfs is mounted (I used a locally compiled kmod-ceph rpm):

 [root@ceph0 ~]# ceph fs ls

 name: cephfs_puppet, metadata pool: puppet_metadata, data pools: [puppet ]

 (umount /mnt ...)

 [root@ceph0 ~]# ceph fs ls

 name: cephfs_puppet, metadata pool: puppet_metadata, data pools: 
 [puppet root ]

This is probably #10288, which was fixed in 0.87.1

 So, I have this pool named root that I added in the cephfs filesystem.

 I then edited the filesystem xattrs :

 [root@ceph0 ~]# getfattr -n ceph.dir.layout /mnt/root

 getfattr: Removing leading '/' from absolute path names

 # file: mnt/root

 ceph.dir.layout=stripe_unit=4194304 stripe_count=1 
 object_size=4194304 pool=root

 I'm therefore assuming client.puppet should not be allowed to write or 
 read anything in /mnt/root, which belongs to the root pool... but that
 is not the case.

 On another machine where I mounted cephfs using the client.puppet key, 
 I can do this :

 The mount was done with the client.puppet key, not the admin one that 
 is not deployed on that node :

 1.2.3.4:6789:/ on /mnt type ceph 
 (rw,relatime,name=puppet,secret=hidden,nodcache)

 [root@dev7248 ~]# echo not allowed > /mnt/root/secret.notfailed

 [root@dev7248 ~]#

 [root@dev7248 ~]# cat /mnt/root/secret.notfailed

 not allowed

This is data you're seeing from the page cache, it hasn't been written 
to RADOS.

You have used the nodcache setting, but that doesn't mean what you 
think it does (it was about caching dentries, not data).  It's actually 
not even used in recent kernels (http://tracker.ceph.com/issues/11009).

You could try the nofsc option, but I don't know exactly how much 
caching that turns off -- the safer approach here is probably to do your 
testing using I/Os that have O_DIRECT set.

 And I can even see the xattrs inherited from the parent dir :

 [root@dev7248 ~]# getfattr -n ceph.file.layout /mnt/root/secret.notfailed

 getfattr: Removing leading '/' from absolute path names

 # file: mnt/root/secret.notfailed

 ceph.file.layout=stripe_unit=4194304 stripe_count=1 
 object_size=4194304 pool=root

 Whereas on the node where I mounted cephfs as ceph admin, I get nothing :

 [root@ceph0 ~]# cat /mnt/root/secret.notfailed

 [root@ceph0 ~]# ls -l /mnt/root/secret.notfailed

 -rw-r--r-- 1 root root 12 Mar  3 15:27 /mnt/root/secret.notfailed

 After some time, the file also gets empty on the puppet client host :

 [root@dev7248 ~]# cat /mnt/root/secret.notfailed

 [root@dev7248 ~]#

 (but the metadata remained ?)

Right -- eventually the cache goes away, and you see the true (empty) 
state of the file.

 Also, as an unprivileged user, I can get ownership of a secret file
 by changing the extended attribute :

 [root@dev7248 ~]# setfattr -n ceph.file.layout.pool -v puppet 
 /mnt/root/secret.notfailed

 [root@dev7248 ~]# getfattr -n ceph.file.layout /mnt/root/secret.notfailed

 getfattr: Removing leading '/' from absolute path names

 # file: mnt/root/secret.notfailed

 ceph.file.layout=stripe_unit=4194304 stripe_count=1 
 object_size=4194304 pool=puppet

Well, you're not really getting ownership of anything here: you're 
modifying the file's metadata, which you are entitled to do (pool 
permissions have nothing to do with file metadata).  There was a recent 
bug where a file's pool layout could be changed even if it had data, but 
that was about safety rather than permissions.

 Final question for those that read down here : it appears that before 
 creating the cephfs filesystem, I used the puppet pool to store a 
 test rbd instance.

 And it appears I cannot get the list of cephfs objects in that pool, 
 whereas I can get those that are on the newly created root pool :

 [root@ceph0 ~]# rados -p puppet ls

 test.rbd

 rbd_directory

 [root@ceph0 ~]# rados -p root ls

 10a.

 10b.

 Bug, or feature ?


I didn't see anything in your earlier steps that would have led to any 
objects in the puppet pool.

[ceph-users] cephfs filesystem layouts : authentication gotchas ?

2015-03-03 Thread SCHAER Frederic
Hi,

I am attempting to test the cephfs filesystem layouts.
I created a user with rights to write only in one pool :

client.puppet
key:zzz
caps: [mon] allow r
caps: [osd] allow rwx pool=puppet
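
(For reference, a client with exactly these caps can be created with something
along these lines; the keyring output path is only an example:)
# ceph auth get-or-create client.puppet mon 'allow r' osd 'allow rwx pool=puppet' -o /etc/ceph/ceph.client.puppet.keyring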

I also created another pool in which, once things are configured correctly, I
would expect this user to be allowed to do nothing.
By the way : looks like the ceph fs ls command is inconsistent when the 
cephfs is mounted (I used a locally compiled kmod-ceph rpm):

[root@ceph0 ~]# ceph fs ls
name: cephfs_puppet, metadata pool: puppet_metadata, data pools: [puppet ]
(umount /mnt ...)
[root@ceph0 ~]# ceph fs ls
name: cephfs_puppet, metadata pool: puppet_metadata, data pools: [puppet root ]

So, I have this pool named root that I added in the cephfs filesystem.
I then edited the filesystem xattrs :

[root@ceph0 ~]# getfattr -n ceph.dir.layout /mnt/root
getfattr: Removing leading '/' from absolute path names
# file: mnt/root
ceph.dir.layout=stripe_unit=4194304 stripe_count=1 object_size=4194304 
pool=root
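
(That layout was set through the directory layout xattrs, with something along
these lines:)
# setfattr -n ceph.dir.layout.pool -v root /mnt/root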

I'm therefore assuming client.puppet should not be allowed to write or read 
anything in /mnt/root, which belongs to the root pool... but that is not the 
case.
On another machine where I mounted cephfs using the client.puppet key, I can do 
this :

The mount was done with the client.puppet key, not the admin one that is not 
deployed on that node :
1.2.3.4:6789:/ on /mnt type ceph 
(rw,relatime,name=puppet,secret=hidden,nodcache)

[root@dev7248 ~]# echo not allowed > /mnt/root/secret.notfailed
[root@dev7248 ~]#
[root@dev7248 ~]# cat /mnt/root/secret.notfailed
not allowed

And I can even see the xattrs inherited from the parent dir :
[root@dev7248 ~]# getfattr -n ceph.file.layout /mnt/root/secret.notfailed
getfattr: Removing leading '/' from absolute path names
# file: mnt/root/secret.notfailed
ceph.file.layout=stripe_unit=4194304 stripe_count=1 object_size=4194304 
pool=root

Whereas on the node where I mounted cephfs as ceph admin, I get nothing :
[root@ceph0 ~]# cat /mnt/root/secret.notfailed
[root@ceph0 ~]# ls -l /mnt/root/secret.notfailed
-rw-r--r-- 1 root root 12 Mar  3 15:27 /mnt/root/secret.notfailed

After some time, the file also gets empty on the puppet client host :
[root@dev7248 ~]# cat /mnt/root/secret.notfailed
[root@dev7248 ~]#
(but the metadata remained ?)

Also, as an unprivileged user, I can get ownership of a secret file by
changing the extended attribute :

[root@dev7248 ~]# setfattr -n ceph.file.layout.pool -v puppet 
/mnt/root/secret.notfailed
[root@dev7248 ~]# getfattr -n ceph.file.layout /mnt/root/secret.notfailed
getfattr: Removing leading '/' from absolute path names
# file: mnt/root/secret.notfailed
ceph.file.layout=stripe_unit=4194304 stripe_count=1 object_size=4194304 
pool=puppet

But fortunately, I haven't succeeded yet (?) in reading that file...
My question therefore is : what am I doing wrong ?

Final question for those that read down here : it appears that before creating 
the cephfs filesystem, I used the puppet pool to store a test rbd instance.
And it appears I cannot get the list of cephfs objects in that pool, whereas I 
can get those that are on the newly created root pool :

[root@ceph0 ~]# rados -p puppet ls
test.rbd
rbd_directory
[root@ceph0 ~]# rados -p root ls
10a.
10b.

Bug, or feature ?

Thanks & regards


P.S : ceph release :

[root@dev7248 ~]# rpm -qa '*ceph*'
kmod-libceph-3.10.0-0.1.20150130gitee04310.el7.centos.x86_64
libcephfs1-0.87-0.el7.centos.x86_64
ceph-common-0.87-0.el7.centos.x86_64
ceph-0.87-0.el7.centos.x86_64
kmod-ceph-3.10.0-0.1.20150130gitee04310.el7.centos.x86_64
ceph-fuse-0.87.1-0.el7.centos.x86_64
python-ceph-0.87-0.el7.centos.x86_64


Re: [ceph-users] cephfs filesystem layouts : authentication gotchas ?

2015-03-03 Thread John Spray



On 03/03/2015 15:21, SCHAER Frederic wrote:


By the way : looks like the “ceph fs ls” command is inconsistent when 
the cephfs is mounted (I used a locally compiled kmod-ceph rpm):


[root@ceph0 ~]# ceph fs ls

name: cephfs_puppet, metadata pool: puppet_metadata, data pools: [puppet ]

(umount /mnt …)

[root@ceph0 ~]# ceph fs ls

name: cephfs_puppet, metadata pool: puppet_metadata, data pools: 
[puppet root ]



This is probably #10288, which was fixed in 0.87.1
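
(A quick check, run on the mon hosts -- this reports the installed binary
version, so it assumes the running daemons match the installed packages:)
# ceph -v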


So, I have this pool named “root” that I added in the cephfs filesystem.

I then edited the filesystem xattrs :

[root@ceph0 ~]# getfattr -n ceph.dir.layout /mnt/root

getfattr: Removing leading '/' from absolute path names

# file: mnt/root

ceph.dir.layout=stripe_unit=4194304 stripe_count=1 
object_size=4194304 pool=root


I’m therefore assuming client.puppet should not be allowed to write or 
read anything in /mnt/root, which belongs to the “root” pool… but that 
is not the case.


On another machine where I mounted cephfs using the client.puppet key, 
I can do this :


The mount was done with the client.puppet key, not the admin one that 
is not deployed on that node :


1.2.3.4:6789:/ on /mnt type ceph 
(rw,relatime,name=puppet,secret=hidden,nodcache)


[root@dev7248 ~]# echo not allowed > /mnt/root/secret.notfailed

[root@dev7248 ~]#

[root@dev7248 ~]# cat /mnt/root/secret.notfailed

not allowed

This is data you're seeing from the page cache, it hasn't been written 
to RADOS.


You have used the nodcache setting, but that doesn't mean what you 
think it does (it was about caching dentries, not data).  It's actually 
not even used in recent kernels (http://tracker.ceph.com/issues/11009).


You could try the nofsc option, but I don't know exactly how much 
caching that turns off -- the safer approach here is probably to do your 
testing using I/Os that have O_DIRECT set.
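
A minimal direct-I/O check (the file name is only an example, and this assumes
GNU dd on the client and that the kernel client honours O_DIRECT on this mount):

[root@dev7248 ~]# dd if=/dev/zero of=/mnt/root/direct.test bs=4M count=1 oflag=direct

With the page cache bypassed the write has to reach the 'root' pool
synchronously, so the expectation -- untested here -- is that the permission
failure shows up as an I/O error from dd rather than appearing to succeed.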



And I can even see the xattrs inherited from the parent dir :

[root@dev7248 ~]# getfattr -n ceph.file.layout /mnt/root/secret.notfailed

getfattr: Removing leading '/' from absolute path names

# file: mnt/root/secret.notfailed

ceph.file.layout=stripe_unit=4194304 stripe_count=1 
object_size=4194304 pool=root


Whereas on the node where I mounted cephfs as ceph admin, I get nothing :

[root@ceph0 ~]# cat /mnt/root/secret.notfailed

[root@ceph0 ~]# ls -l /mnt/root/secret.notfailed

-rw-r--r-- 1 root root 12 Mar  3 15:27 /mnt/root/secret.notfailed

After some time, the file also gets empty on the “puppet client” host :

[root@dev7248 ~]# cat /mnt/root/secret.notfailed

[root@dev7248 ~]#

(but the metadata remained ?)

Right -- eventually the cache goes away, and you see the true (empty) 
state of the file.
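
One way to see that sooner on the client (as root, purely as a test aid) might
be to force the write-back and then drop the page cache by hand -- though how
quickly the stale data disappears depends on how the kernel handles the failed
write-back:

[root@dev7248 ~]# sync; echo 3 > /proc/sys/vm/drop_caches
[root@dev7248 ~]# cat /mnt/root/secret.notfailed
[root@dev7248 ~]#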


Also, as an unprivileged user, I can get ownership of a “secret” file
by changing the extended attribute :


[root@dev7248 ~]# setfattr -n ceph.file.layout.pool -v puppet 
/mnt/root/secret.notfailed


[root@dev7248 ~]# getfattr -n ceph.file.layout /mnt/root/secret.notfailed

getfattr: Removing leading '/' from absolute path names

# file: mnt/root/secret.notfailed

ceph.file.layout=stripe_unit=4194304 stripe_count=1 
object_size=4194304 pool=puppet


Well, you're not really getting ownership of anything here: you're 
modifying the file's metadata, which you are entitled to do (pool 
permissions have nothing to do with file metadata).  There was a recent 
bug where a file's pool layout could be changed even if it had data, but 
that was about safety rather than permissions.


Final question for those that read down here : it appears that before 
creating the cephfs filesystem, I used the “puppet” pool to store a 
test rbd instance.


And it appears I cannot get the list of cephfs objects in that pool, 
whereas I can get those that are on the newly created “root” pool :


[root@ceph0 ~]# rados -p puppet ls

test.rbd

rbd_directory

[root@ceph0 ~]# rados -p root ls

10a.

10b.

Bug, or feature ?



I didn't see anything in your earlier steps that would have led to any 
objects in the puppet pool.


To get closer to the effect you're looking for, you probably need to 
combine your pool settings with some permissions on the folders, and do 
your I/O as a user other than root -- your user-level permissions would 
protect your metadata, and your pool permissions would protect your data.
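
A sketch of that combination (directory names are only examples):

[root@ceph0 ~]# chmod 700 /mnt/root                                   # ordinary mode bits: only root may create entries here
[root@ceph0 ~]# setfattr -n ceph.dir.layout.pool -v root /mnt/root    # data for new files under it goes to the 'root' pool
[root@ceph0 ~]# mkdir /mnt/scratch && chmod 1777 /mnt/scratch         # unprivileged users work here; data stays in the default 'puppet' pool

Then run the client tests as a non-root user, so the mode bits protect the
metadata operations that the pool caps alone cannot.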


There are also plans to make finer grained access control for the 
metadata, but that's not there yet.


Cheers,
John
