On 21/04/15 13:43, Neville wrote:
To test this further I tried the following basic tests:
On Host 2:
root@devops-kvm02:/var/lib/nova/instances# echo hello > test
root@devops-kvm02:/var/lib/nova/instances# cat test
hello
root@devops-kvm02:/var/lib/nova/instances#
Then from Host 1:
root@devops-kvm01:/var/lib/nova/instances# cat test
cat: test: Operation not permitted
root@devops-kvm01:/var/lib/nova/instances#
Then back on Host 2:
root@devops-kvm02:/var/lib/nova/instances# cat test
cat: test: Operation not permitted
root@devops-kvm02:/var/lib/nova/instances#
Should this even work? My understanding is that CephFS allows concurrent
access, but I'm not sure whether there is some file locking going on that
I need to understand.
You might want to check your OSD authentication keys for the client
hosts. The results above seem consistent with settings that forbid the
clients from reading objects from the CephFS data pool (kvm02 can
initially read because it has its written data in cache). Perhaps your
hosts have keys set up that explicitly limit their access to the RBD
pools, and don't take account of the CephFS data pool.
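As a sketch of how to check this (the key name client.kvm01 and the pool
names rbd/data below are assumptions; substitute whatever your deployment
actually uses):

```shell
# Show the capabilities currently granted to the client's key
# (key name client.kvm01 is an assumption -- substitute yours)
ceph auth get client.kvm01

# List the pools so you can identify the CephFS data pool
# ("data" is the historical default name)
ceph osd lspools

# If the osd caps only mention the RBD pool, e.g.
#   osd = "allow rwx pool=rbd"
# extend them to cover the CephFS data pool as well
ceph auth caps client.kvm01 \
    mon 'allow r' \
    mds 'allow' \
    osd 'allow rwx pool=rbd, allow rwx pool=data'
```

Note that "ceph auth caps" replaces the full set of caps rather than
appending to it, so include everything the client still needs in one
command.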
John
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com