Please ignore the message below; it has nothing to do with Ceph.

Sorry for the spam.

Best regards,

=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: ceph-users <[email protected]> on behalf of Frank Schilder <[email protected]>
Sent: 17 June 2019 20:33
To: [email protected]
Subject: [ceph-users] ceph fs: stat fails on folder

We observe the following on ceph fs clients with identical ceph fs mounts:

[frans@sophia1 ~]$ ls -l ../neda
ls: cannot access ../neda/NEWA_TEST: Permission denied
total 5
drwxrwxr-x 1 neda neda    1 May 17 19:30 ffpy_test
-rw-rw-r-- 1 neda neda  135 May 17 21:06 mount_newa
drwxrwxr-x 1 neda neda    1 Jun  6 15:39 neda
drwxrwx--- 1 neda neda 1405 Jun 13 15:25 NEWA
d????????? ? ?    ?       ?            ? NEWA_TEST
-rw-rw-r-- 1 neda neda 3671 Jun  3 15:37 test_post.py
-rw-r--r-- 1 neda neda  211 May 17 20:28 test_sophia.slurm

[frans@sn440 ~]$ ls -l ../neda
total 5
drwxrwxr-x 1 neda neda    1 May 17 19:30 ffpy_test
-rw-rw-r-- 1 neda neda  135 May 17 21:06 mount_newa
drwxrwxr-x 1 neda neda    1 Jun  6 15:39 neda
drwxrwx--- 1 neda neda 1405 Jun 13 15:25 NEWA
drwxrwxr-x 1 neda neda    0 May 17 18:58 NEWA_TEST
-rw-rw-r-- 1 neda neda 3671 Jun  3 15:37 test_post.py
-rw-r--r-- 1 neda neda  211 May 17 20:28 test_sophia.slurm

On sophia1, 'stat ../neda/NEWA_TEST' returns permission denied, while none of
our other clients show the problem. I expect that temporarily evicting the
client or failing over the MDS would restore access from this client. However,
sophia1 is the head node of an HPC cluster, and I would really like to avoid
clearing the client cache if possible.
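As a side note on the 'd?????????' line in the first listing: ls gets the entry
name from readdir() but every other column from stat(), so when stat() fails it
prints '?' placeholders. A minimal local sketch of that behaviour (plain POSIX
permissions, not CephFS; the path names are made up, and it should be run as a
non-root user, since root bypasses the permission check):

```python
import os
import stat
import tempfile

# readdir() only needs read permission on the directory, but stat() on an
# entry needs execute (search) permission on the directory. With read-only
# permission, ls can list the name but not the metadata -> '?' columns.
base = tempfile.mkdtemp()
child = os.path.join(base, "NEWA_TEST")  # hypothetical name, mirroring the listing
os.mkdir(child)
os.chmod(base, stat.S_IRUSR)  # read-only: readdir works, stat on entries fails

names = os.listdir(base)  # succeeds: the entry name is visible
try:
    os.stat(child)        # fails for non-root: no search permission on parent
    failed = False
except PermissionError:
    failed = True

os.chmod(base, 0o755)     # restore permissions so cleanup can proceed
print(f"readdir saw {names}, stat failed: {failed}")
```

This only illustrates why ls prints question marks; in our case the permission
denied presumably comes from the client's cached state rather than the actual
mode bits, since the other clients stat the same directory fine.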

There is no urgent pressure to fix this, and I can collect some debug info in
case this is a yet-unknown issue. Please let me know what information to
collect and how to proceed.
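In case it helps, this is roughly what I would start collecting; a sketch only,
with <mds-id> as a placeholder for the active MDS, and the debugfs part
assuming a kernel client with debugfs mounted:

```shell
# On a host with the admin keyring: cluster state and MDS view of the client
ceph -s
ceph daemon mds.<mds-id> session ls          # find the sophia1 client session
ceph daemon mds.<mds-id> dump_ops_in_flight  # any stuck MDS requests?

# On the affected client (kernel mount, debugfs mounted):
cat /sys/kernel/debug/ceph/*/mdsc            # in-flight MDS requests
dmesg | grep -i ceph                         # recent kernel client messages
```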

The storage cluster is still at 13.2.2 (upgrade planned):
[root@ceph-01 ~]# ceph -v
ceph version 13.2.2 (02899bfda814146b021136e9d8e80eba494e1126) mimic (stable)

and the client is at 12.2.11 (upgrade to mimic planned):
[frans@sophia1 ~]$ ceph -v
ceph version 12.2.11 (26dc3775efc7bb286a1d6d66faee0ba30ea23eee) luminous 
(stable)

I can't see anything unusual in the logs or health reports.

Thanks for your help!

=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com