On Mon, Jun 8, 2015 at 5:20 PM, Francois Lafont <flafdiv...@free.fr> wrote:
> Hi,
>
> On 27/05/2015 22:34, Gregory Farnum wrote:
>
>> Sorry for the delay; I've been traveling.
>
> No problem; I'm not very quick to answer either. ;)
>
>>> Ok, I see. According to the online documentation, the way to close
>>> a cephfs client session is:
>>>
>>> ceph daemon mds.$id session ls             # to get the $session_id and the $address
>>> ceph osd blacklist add $address
>>> ceph osd dump                              # to get the $epoch
>>> ceph daemon mds.$id osdmap barrier $epoch
>>> ceph daemon mds.$id session evict $session_id
>>>
>>> Is it correct?
>>>
>>> With the commands above, could I reproduce the client freeze in my testing
>>> cluster?
>>
>> Yes, I believe so.
>
> In fact, after some tests, the commands above correctly evict the
> client (`ceph daemon mds.1 session ls` returns an empty array), but on
> the client side a new connection is automatically established as soon
> as the cephfs mountpoint is accessed.

What do you mean, as soon as it's accessed? The session evict is a
"polite" close, yes, and there's nothing blocking future mounts if you
try to mount again or if you don't have any caps... but if you have
open files I'd expect things to get stuck. Maybe I'm overlooking
something.
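
For reference, the sequence you quoted can be wrapped into one script.
This is a rough, untested sketch: it assumes a single client session,
an MDS named mds.a (hypothetical), and that jq is available to pull
fields out of the JSON; the exact field names in the session ls output
can vary between releases:

    #!/bin/sh
    # Evict a CephFS client: blacklist its address, then drop the session.
    MDS=a                                   # hypothetical MDS name
    # Grab the first session; in practice, match the client you want.
    SESSION_ID=$(ceph daemon mds.$MDS session ls | jq '.[0].id')
    ADDRESS=$(ceph daemon mds.$MDS session ls | jq -r '.[0].inst' \
              | awk '{print $2}')
    # Blacklist the client address so the OSDs stop accepting its ops.
    ceph osd blacklist add $ADDRESS
    # Make the MDS wait until it has seen the osdmap with the blacklist.
    EPOCH=$(ceph osd dump | awk '/^epoch/ {print $2; exit}')
    ceph daemon mds.$MDS osdmap barrier $EPOCH
    # Finally drop the session on the MDS side.
    ceph daemon mds.$MDS session evict $SESSION_ID

After that, `ceph daemon mds.$MDS session ls` should return an empty
array, which matches what you observed.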

> In fact, I haven't succeeded in reproducing the freeze. ;) I tried
> stopping the network on the client side (ifdown -a), and after a few
> minutes (more than 60 seconds, though) I saw "closing stale session
> client" in the mds log. But after an `ifup -a`, I got back a cephfs
> connection and a mountpoint in good health.

Was this while you were doing writes to the filesystem, or was it
idle? I don't remember all the mechanisms in great detail but if the
mount is totally idle I'd expect it to behave much differently from
one where you have files open and being written to.
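
If you want to retry the test with a non-idle mount, something like
this should do it (untested sketch; /mnt/cephfs is a hypothetical
mountpoint, and the 90-second wait just needs to exceed the default
60-second mds session timeout). Keep writes in flight while you pull
the network:

    # On the client: keep a file open with dirty data on the mount.
    while true; do
        dd if=/dev/zero of=/mnt/cephfs/load bs=4M count=256 conv=fsync
    done &

    # Then take the network down, wait past the session timeout,
    # and bring it back up.
    ifdown -a
    sleep 90      # default mds session timeout is 60s
    ifup -a
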
-Greg