On Fri, Dec 22, 2017 at 3:20 AM, Yan, Zheng wrote:
> idle client shouldn't hold so many caps.
>
I'll try to make it reproducible for you to test.

> yes. For now, it's better to run "echo 3 >/proc/sys/vm/drop_caches"
> after the cronjob finishes
Thanks. I'll adopt that for now.
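For reference, a minimal sketch of how that could be scheduled, assuming a
root crontab and a nightly job that is done before 04:00 (the time and the
ordering are assumptions, not something stated in the thread):

    # root crontab entry: flush dirty pages, then ask the kernel to drop
    # clean page cache, dentries and inodes once the nightly job is done
    0 4 * * *  sync && echo 3 > /proc/sys/vm/drop_caches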
Hello Zheng,
Thanks for opening that issue on the bug tracker.
Also thanks for that tip. Caps dropped from 1.6M to 600k for that client.
Is it safe to run in a cronjob? Say, once or twice a day, in production?
Thanks!
Regards,
Webert Lima
DevOps Engineer at MAV Tecnologia
I have upgraded the kernel on a client node (one that has close-to-zero
traffic) used for tests.
{
    "reconnecting" : false,
    "id" : 1620266,
    "num_leases" : 0,
    "inst" : "client.1620266 10.0.0.111:0/3921220890",
    "state" : "open",
    "completed_requests" : 0,
So,
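The output above looks like one entry from the MDS's client session list.
For anyone wanting to pull the same per-client view, a rough sketch, with
"mds.a" as a placeholder daemon name and assuming jq is installed on the
MDS host:

    # list all client sessions held by this MDS (run on the MDS host)
    ceph daemon mds.a session ls
    # each entry carries a "num_caps" field, so the top cap holders can be
    # picked out quickly
    ceph daemon mds.a session ls | jq 'sort_by(.num_caps) | reverse | .[0:5]'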
On Fri, Dec 15, 2017 at 10:58 AM, Yan, Zheng wrote:
>
> 300k are already quite a lot. opening them requires a long time. does your
> mail server really open so many files?

Yes, probably. It's a commercial solution. A few thousand domains, tens
of thousands of users and god
Thanks
On Fri, Dec 15, 2017 at 10:46 AM, Yan, Zheng wrote:
> recent
> version kernel client and ceph-fuse should trim their cache
> aggressively when mds recovers.
>
So the bug (not sure if I can call it a bug) is already fixed in newer
kernels? Can I just update the kernel
Hello, Mr. Yan
On Thu, Dec 14, 2017 at 11:36 PM, Yan, Zheng wrote:
>
> The client holds so many capabilities because the kernel keeps lots of
> inodes in its cache. The kernel does not trim inodes by itself if it has
> no memory pressure. It seems you have set the mds_cache_size config to
>
> So, questions: does that really matter? What are possible impacts? What
> could have caused these 2 hosts to hold so many capabilities?
> One of the hosts is for test purposes, traffic is close to zero. The other
> host wasn't using cephfs at all. All services stopped.
>
The reason might be
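As an aside, the mds_cache_size value mentioned above can be read back from
the running daemon. A minimal sketch, again with "mds.a" as a placeholder
and assuming access to the MDS admin socket (on recent Luminous releases
there is also a byte-based mds_cache_memory_limit alongside it):

    # show the configured inode-count cache limit on the MDS
    ceph daemon mds.a config get mds_cache_size
    # or list every cache-related setting in one go
    ceph daemon mds.a config show | grep mds_cache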
On Fri, Dec 15, 2017 at 1:18 AM, Webert de Souza Lima
wrote:
> Hi,
>
> I've been looking at ceph mds perf counters and I saw that one of my clusters
> was hugely different from the other in number of caps:
>
> rlat inos caps | hsr hcs hcr | writ read actv | recd recy stry
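Those column names look like the header of the live MDS counter display. A
hedged way to reproduce that view, with "mds.a" again standing in for the
real daemon name:

    # top-like live view of the MDS perf counters (rlat/inos/caps/... columns),
    # run on the host that has the MDS admin socket
    ceph daemonperf mds.a
    # one-shot dump of all raw counters behind those columns
    ceph daemon mds.a perf dump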
Hi Patrick,
On Thu, Dec 14, 2017 at 7:52 PM, Patrick Donnelly
wrote:
>
> It's likely you're a victim of a kernel backport that removed a dentry
> invalidation mechanism for FUSE mounts. The result is that ceph-fuse
> can't trim dentries.
>
Does this apply even though I'm not using FUSE?
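For what it's worth, a quick way to confirm which client type a node is
actually using (output will of course vary per host):

    # kernel CephFS mounts show up as "type ceph",
    # ceph-fuse mounts as "type fuse.ceph-fuse"
    mount | grep -i ceph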