The clients are almost all idle, and there is very little load on the cluster.
I can see no errors or warnings in the client logs when the file share is
unmounted. Thanks!
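
For anyone else chasing this, the basic checks look roughly like the sketch
below ("mds.a" and the mount point are placeholders for the actual daemon
name and path):

```shell
# On a client VM: look for CephFS-related kernel messages,
# especially around the time the share is unmounted.
dmesg | grep -i ceph

# On the MDS host: list client sessions to see which clients
# the MDS believes are holding capabilities.
# "mds.a" is a placeholder for the actual MDS daemon name.
ceph daemon mds.a session ls
```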

> On Oct 4, 2016, at 10:31 PM, Yan, Zheng <[email protected]> wrote:
> 
>> On Tue, Oct 4, 2016 at 11:30 PM, John Spray <[email protected]> wrote:
>>> On Tue, Oct 4, 2016 at 5:09 PM, Stephen Horton <[email protected]> wrote:
>>> Thank you, John. Both my OpenStack hosts and the VMs are running 
>>> 4.4.0-38-generic #57-Ubuntu SMP x86_64. I can see no evidence that any of 
>>> the VMs are holding large numbers of files open. If this is likely a client 
>>> bug, is there some process I can follow to file a bug report?
>> 
>> It might be worthwhile to file a bug report with Ubuntu, as they'd be
>> the ones who would ideally backport fixes to their stable kernels (in
>> this instance it's hard to know if this is a bug in the latest kernel
>> code or something fixed since 4.4).
>> 
>> It would be really useful if you could try installing the latest
>> released kernel on the clients and see if the issue persists: if so
>> then a ticket on tracker.ceph.com will be a priority for us to fix.
>> 
>> CCing Zheng -- are there any noteworthy fixes between 4.4 and latest
>> kernel that might be relevant?
> 
> No bugs have been found in this area since 4.4.
> 
>> 
>> John
>> 
>> 
>>> 
>>>>> On Oct 4, 2016, at 9:39 AM, John Spray <[email protected]> wrote:
>>>>> 
>>>>> On Tue, Oct 4, 2016 at 4:27 PM, Stephen Horton <[email protected]> wrote:
>>>>> Adding that all of my Ceph components are version:
>>>>> 10.2.2-0ubuntu0.16.04.2
>>>>> 
>>>>> OpenStack is Mitaka on Ubuntu 16.04.x. The Manila file share package is 
>>>>> 1:2.0.0-0ubuntu1.
>>>>> 
>>>>> My scenario: I have a 3-node Ceph cluster running OpenStack Mitaka. 
>>>>> Each node has 256 GB RAM and a 14 TB RAID 5 array. I have 30 VMs running 
>>>>> in OpenStack; all mount the Manila file share using the native CephFS 
>>>>> kernel client driver. Each VM user has put 10-20 GB of files on the 
>>>>> share, but most of this is backup data, so the IO requirement is very 
>>>>> low. I initially tried ceph-fuse, but its latency was poor; moving to 
>>>>> the kernel client driver for mounting the share improved performance 
>>>>> greatly. However, I am now getting the cache pressure issue.
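
For completeness, a kernel-client mount of such a share looks roughly like
this; the monitor address, share path, client name, and secret file below
are all placeholders:

```shell
# Mount a CephFS share with the native kernel client.
# Placeholder monitor address, share path, and keyring secret.
mount -t ceph 192.168.0.1:6789:/volumes/share1 /mnt/share \
  -o name=manila,secretfile=/etc/ceph/manila.secret
```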
>>>> 
>>>> Aside: bear in mind that the kernel client doesn't support quotas, so
>>>> any size limits you set on your Manila shares won't be respected.
>>>> 
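
For anyone following along: CephFS quotas are stored as extended attributes
on directories, and at present only ceph-fuse enforces them. Setting one
looks roughly like this (the path and limits are illustrative):

```shell
# Cap the directory tree at ~100 GB and 10,000 files.
# Enforced by ceph-fuse clients only; the kernel client ignores it.
setfattr -n ceph.quota.max_bytes -v 100000000000 /mnt/share/dir
setfattr -n ceph.quota.max_files -v 10000 /mnt/share/dir
```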
>>>>> Can someone help me with the math to properly size the MDS cache? How do 
>>>>> I know whether the cache size is too small (I think very few files are 
>>>>> in use at any given time) versus the clients being broken and not 
>>>>> releasing cache properly?
>>>> 
>>>> It's almost never the case that your cache is too small unless your
>>>> workload is holding a silly number of files open at one time -- assume
>>>> this is a client bug (although some people work around it by creating
>>>> much bigger MDS caches!)
>>>> 
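
In Jewel, the MDS cache limit is counted in inodes via mds_cache_size
(default 100000). The "bigger cache" workaround mentioned above amounts to
something like the following; "mds.a" and the value are only illustrative:

```shell
# Raise the inode cache limit at runtime on a running MDS
# ("mds.a" is a placeholder for the actual daemon name) ...
ceph tell mds.a injectargs '--mds-cache-size 1000000'

# ... and persist it in ceph.conf on the MDS host:
#   [mds]
#     mds cache size = 1000000
```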
>>>> You've mentioned the versions of openstack/ubuntu/ceph, but what
>>>> kernel are you running?
>>>> 
> 
> When do the warnings happen (while the client is idle, or while it is
> under load)? Are there any kernel warnings when unmounting the client
> that emitted the warning?
> 
> Regards
> Yan, Zheng
> 
> 
>>>> John
>>>> 
>>>>> Thank you!
>>>>> _______________________________________________
>>>>> ceph-users mailing list
>>>>> [email protected]
>>>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>> 

