PS: I have now stopped this MDS; the active role migrated and the warning is gone. Cannot test any further.
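
For reference, roughly the sequence (exact commands depend on how the MDS
daemons are managed here; <mds-host> is only a placeholder):

    systemctl stop ceph-mds@<mds-host>   # stop the affected MDS daemon
    ceph mds stat                        # check that a standby took over as active
    ceph health detail                   # confirm the warning has cleared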

Dzianis Kahanovich writes:
> John Spray writes:
> 
>>> It looks like it happened both times at night - probably during long
>>> backup/write operations (something like a compressed local root backup to
>>> cephfs). Also, all local mounts inside the cluster (fuse) were moved to
>>> automount to reduce client pressure. There are still 5 permanent kernel
>>> clients.
>>>
>>> Now I have remounted all but 1 kernel client. The message persists.
>>
>> There's probably a reason you haven't already done this, but the next
>> logical debug step would be to try unmounting that last kernel client
>> (and mention what version it is)
> 
> 4.5.0. That VM eventually deadlocked in a few places (the problem may have
> the same root cause) and was hard-restarted; it is now mounted again. The
> message persists.
> 
> Just about a week ago I removed some of the additional mount options. Since
> the old days (when the VMs were on the same servers as the cluster) I had
> mounted with
> "wsize=131072,rsize=131072,write_congestion_kb=128,readdir_max_bytes=131072"
> (and net.ipv4.tcp_notsent_lowat = 131072) to conserve RAM. After obtaining
> good servers for the VMs I removed them. Maybe it is better to turn them back
> on for a better congestion quantum.
> 
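
For anyone who wants to retry that configuration, a rough sketch of what the
old mounts looked like (monitor address, mount point and name= are placeholders
here; only the sizes are the real values quoted above):

    # kernel CephFS mount with reduced buffer/readahead sizes
    mount -t ceph mon1:6789:/ /mnt/cephfs -o \
        name=admin,wsize=131072,rsize=131072,write_congestion_kb=128,readdir_max_bytes=131072

    # matching sysctl so unsent data per TCP socket also stays small
    sysctl -w net.ipv4.tcp_notsent_lowat=131072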


-- 
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
