On 2019/8/24 9:59 AM, Hugh Dickins wrote:
> On Wed, 21 Aug 2019, Alex Shi wrote:
>> On 2019/8/21 2:24 AM, Hugh Dickins wrote:
>>> I'll set aside what I'm doing, and switch to rebasing ours to v5.3-rc
>>> and/or mmotm. Then compare with what Alex has, to see if there's any
>>> good reason to prefer one
On 2019/8/22 11:20 PM, Daniel Jordan wrote:
>>
>>>
>>> https://git.kernel.org/pub/scm/linux/kernel/git/wfg/vm-scalability.git/tree/case-lru-file-readtwice>
>>> It's also synthetic but it stresses lru_lock more than just anon
>>> alloc/free. It hits the page activate path, which is where we see
On 2019/8/26 4:39 PM, Konstantin Khlebnikov wrote:
> because they mix pages from different cgroups.
>
> pagevec_lru_move_fn and friends need a better implementation:
> either sorting pages or splitting vectors on a per-lruvec basis.
Right, this should be the next step to improve. Maybe we could
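Konstantin's suggestion above, splitting the batch per lruvec so each lruvec's lock is taken once per sub-batch rather than once per page, can be sketched as a toy Python model. All names here (Lruvec, drain_pagevec) are illustrative stand-ins, not the kernel's actual structures or API:

```python
from collections import defaultdict
from threading import Lock

# Toy model, not kernel code: each "lruvec" carries its own lru_lock,
# and a batch of pages (a "pagevec") may mix pages from many lruvecs.
class Lruvec:
    def __init__(self):
        self.lru_lock = Lock()
        self.pages = []

def drain_pagevec(pagevec, lruvecs):
    """Split the batch per lruvec, then take each lruvec's lock once for
    its whole sub-batch, instead of a lock round-trip per page."""
    per_lruvec = defaultdict(list)
    for page, lruvec_id in pagevec:
        per_lruvec[lruvec_id].append(page)

    lock_acquisitions = 0
    for lruvec_id, pages in per_lruvec.items():
        with lruvecs[lruvec_id].lru_lock:   # one acquisition per lruvec
            lock_acquisitions += 1
            lruvecs[lruvec_id].pages.extend(pages)
    return lock_acquisitions

lruvecs = {i: Lruvec() for i in range(4)}
pagevec = [(f"page{n}", n % 4) for n in range(14)]  # 14 pages across 4 lruvecs
print(drain_pagevec(pagevec, lruvecs))  # 4 lock acquisitions instead of 14
```

The same effect could be had by sorting the pagevec by lruvec and locking on each boundary; grouping with a dict is just the shortest way to show the batching idea.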
On 2019/8/22 2:00 AM, Daniel Jordan wrote:
> This is system-wide, right, not per container? Even per container, 89 usec
> isn't much contention over 20 seconds. You may want to give this a try:
Yes, perf lock shows the host info.
Hi Alex,
On 8/20/19 5:48 AM, Alex Shi wrote:
In some data centers, containers are widely used to deploy different kinds
of services; multiple memcgs then share the per-node pgdat->lru_lock, which
causes heavy lock contention when doing lru operations.
On my 2 socket * 6 cores E5-2630 platform, 24
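The contention described above comes from every memcg on a node sharing the single pgdat->lru_lock; the patchset's change is to give each lruvec (one per memcg per node) its own lock. A toy Python model of just that structural change, with hypothetical class names rather than the kernel's real structures:

```python
from threading import Lock

# Before the patchset: one lru_lock per node, shared by every memcg.
class Pgdat:
    def __init__(self):
        self.lru_lock = Lock()

# After: each lruvec (one per memcg per node) carries its own lru_lock.
class Lruvec:
    def __init__(self):
        self.lru_lock = Lock()

class Memcg:
    def __init__(self, nr_nodes):
        self.lruvecs = [Lruvec() for _ in range(nr_nodes)]

pgdat = Pgdat()
memcgs = [Memcg(nr_nodes=2) for _ in range(3)]

# Before: three containers doing lru work on node 0 serialize on one lock.
before = {id(pgdat.lru_lock) for _ in memcgs}
# After: each container locks its own lruvec on node 0, so lru work in
# different memcgs no longer contends.
after = {id(m.lruvecs[0].lru_lock) for m in memcgs}
print(len(before), len(after))  # 1 3
```

The model only shows the lock ownership change; the real patchset also has to keep page-to-lruvec lookups stable while the lock is taken.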
On 2019/8/21 2:24 AM, Hugh Dickins wrote:
> I'll set aside what I'm doing, and switch to rebasing ours to v5.3-rc
> and/or mmotm. Then compare with what Alex has, to see if there's any
> good reason to prefer one to the other: if no good reason to prefer ours,
> I doubt we shall bother to repost,
>
> Thanks for the Cc Michal. As Shakeel says, Google prodkernel has been
> using our per-memcg lru locks for 7 years or so. Yes, we did not come
> up with supporting performance data at the time of posting, nor since:
> I see Alex has done much better on that (though I haven't even glanced
On Tue, Aug 20, 2019 at 3:45 AM Michal Hocko wrote:
>
> On Tue 20-08-19 17:48:23, Alex Shi wrote:
> > This patchset moves lru_lock into lruvec, giving each lruvec its own
> > lru_lock and thus a separate lru_lock for each memcg.
> >
> > The per-memcg lru_lock in this patch series eases the lru_lock
> > contention a lot in
>