Hi All,
Victory! Found the issue - it was a mistake on my part, however it does raise
another question...
The issue was:
root@osh1:~# ceph --cluster apics auth list
installed auth entries:
<SNIP>
client.cuckoo
key: AQBjTblS4AFAARAAZyumzFyk2JS8d9AjutRoTQ==
caps: [mon] allow r
caps: [osd] allow rwx pool=staging, allow rwx pool=media2
When I recreated the pool I changed the name from 'media2' to 'media3' - so there
wasn't any authorisation for the new pool. I've corrected this (see below), then
REMOUNTED the filesystem on the client (the fix didn't take effect until I'd done this):
root@osh1:~# ceph --cluster apics auth caps client.cuckoo osd 'allow rwx pool=media3, allow rwx pool=staging' mon 'allow r'
Ref: http://www.sebastien-han.fr/blog/2013/07/26/ceph-update-cephx-keys/
The BIG QUESTION though... The data was being stored - I verified this by
MD5-summing the data after it was written - but it wasn't being accounted for
anywhere, and the permissions system looks to have failed. This looks like a
big security hole; surely a 'permission denied' error should have occurred here?
Also, although the data was being stored, it didn't appear in any df output and
couldn't be seen using 'rados --cluster <name> ls -p <pool name>'?
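The integrity check mentioned above was a simple write-then-read MD5 comparison. A minimal sketch of that approach (the paths here are hypothetical; on the real cluster the file would live under the CephFS mount, e.g. /mnt/ceph/media3):

```shell
# Write a payload, record its checksum, re-read it later and compare.
# On CephFS you would copy the file onto the mount, remount or drop
# caches, then read it back before recomputing the checksum.
src=/tmp/md5_check_payload
echo "sample payload" > "$src"
before=$(md5sum "$src" | awk '{print $1}')
# ... round-trip through the filesystem under test here ...
after=$(md5sum "$src" | awk '{print $1}')
[ "$before" = "$after" ] && echo "checksums match"
```

If the checksums match after a remount, the data genuinely made it to stable storage, which is what makes the missing accounting so surprising.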
Thanks
Alex
----- Original Message -----
From: "Alex Pearson" <[email protected]>
To: [email protected]
Sent: Thursday, 2 January, 2014 7:26:45 PM
Subject: [ceph-users] CephFS files not appearing in DF (or rados ls)
Hi All,
I've built up a fairly standard Ceph cluster (I think), and believe I have
everything configured correctly with the MDS, only I'm seeing something very
strange: files I write via CephFS don't appear in ANY pool at all?
For example, below shows the configured pools and the MDS set up correctly to
serve them:
root@osh1:~# ceph --cluster apics osd dump|egrep "data|media3"
pool 0 'data' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0 crash_replay_interval 45
pool 8 'media3' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 600 pgp_num 600 last_change 189 owner 0
pool 9 'metadata' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 200 pgp_num 200 last_change 208 owner 0
root@osh1:~# ceph --cluster apics mds dump|grep pool
dumped mdsmap epoch 216
data_pools 0,8,10,11
metadata_pool 9
root@cuckoo:/mnt/ceph# mount|grep ceph
10.30.10.101:/ on /mnt/ceph type ceph (name=cuckoo,key=client.cuckoo)
root@cuckoo:/mnt/ceph# cephfs . show_layout
layout.data_pool: 0
layout.object_size: 4194304
layout.stripe_unit: 4194304
layout.stripe_count: 1
root@cuckoo:/mnt/ceph# cephfs ./media3/ show_layout
layout.data_pool: 8
layout.object_size: 4194304
layout.stripe_unit: 4194304
layout.stripe_count: 1
root@cuckoo:/mnt/ceph# cephfs ./staging/ show_layout
layout.data_pool: 10
layout.object_size: 4194304
layout.stripe_unit: 4194304
layout.stripe_count: 1
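With stripe_count=1 as in the layouts above, a file is simply chunked into object_size pieces, so a byte offset maps to object index offset / object_size (integer division). A quick sanity check of that arithmetic (the offset value is just an illustrative assumption):

```shell
# With stripe_count=1, byte offset -> object index is integer division
# by object_size (4194304 = 4 MiB, as in the layouts above).
object_size=4194304
offset=10485760                    # hypothetical: 10 MiB into a file
echo $(( offset / object_size ))   # -> 2 (the third 4 MiB object)
```

This is why a 406G directory like ./media3 should translate into roughly a hundred thousand 4 MiB objects in its data pool - making a zero object count in that pool all the more puzzling.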
root@cuckoo:/mnt/ceph# du -h --max-depth=1
406G ./media3
512 ./pictures
33G ./staging
438G .
As you can see there is data in all of these directories... but the DF commands
don't account for it all - media3 in particular shows no usage at all!
root@osh1:~# ceph --cluster apics df detail
GLOBAL:
    SIZE   AVAIL  RAW USED  %RAW USED  OBJECTS
    6513G  6295G  87212M    1.31       11037
POOLS:
    NAME      ID  CATEGORY  USED    %USED  OBJECTS  READ   WRITE
    data      0   -         0       0      0        365    1460
    rbd       2   -         0       0      0        0      0
    images    5   -         0       0      0        0      0
    volumes   6   -         0       0      0        0      0
    media3    8   -         0       0      0        0      0
    metadata  9   -         122M    0      113      73     37700
    staging   10  -         43332M  0.65   10924    45038  250k
    pictures  11  -         0       0      0        0      0
root@osh1:~# rados --cluster apics df
pool name  category  KB        objects  clones  degraded  unfound  rd     rd KB      wr      wr KB
data       -         0         0        0       0         0        365    1489281    1460    2978562
images     -         0         0        0       0         0        0      0          0       0
media3     -         0         0        0       0         0        0      0          0       0
metadata   -         125108    113      0       0         0        73     201        37700   1529366
pictures   -         0         0        0       0         0        0      0          0       0
rbd        -         0         0        0       0         0        0      0          0       0
staging    -         44372699  10924    0       0         0        45038  150190504  256900  127373372
volumes    -         0         0        0       0         0        0      0          0       0
total used        89305432    11037
total avail       6601331080
total space       6829990748
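One way to cross-check the accounting is to sum the per-pool KB column from a saved 'rados df' dump against the 'total used' line: with rep size 2, raw used should be roughly twice the per-pool sum. A sketch using the non-zero figures from the output above (file path hypothetical):

```shell
# Sum the per-pool KB column (non-zero pools copied from the dump above)
# and compare with 'total used'. With 2x replication, total used
# (89305432 KB) should be roughly twice this sum.
cat > /tmp/rados_df_pools.txt <<'EOF'
data     0
metadata 125108
staging  44372699
EOF
awk '{sum += $2} END {print sum}' /tmp/rados_df_pools.txt   # -> 44497807
```

Here 44497807 * 2 is about 89 GB, which matches 'total used' closely - so the raw usage is consistent with what the pools report, and the media3 data is the piece that is unaccounted for.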
What am I missing? I know it must be something basic, but I can't for the life
of me figure it out. I did rebuild the MDS from scratch using 'ceph mds newfs' at
one point whilst building the cluster - are there any steps I might have missed
following that? Any detailed docs on the MDS would be appreciated too!
Thanks in advance for any help
Cheers
Alex
--
This message has been scanned for viruses and
dangerous content by MailScanner, and is
believed to be clean.
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com