Re: [PATCH v11 0/6] mm / virtio: Provide support for unused page reporting

2019-10-10 Thread Nitesh Narayan Lal


On 10/10/19 3:36 AM, David Hildenbrand wrote:
> On 09.10.19 21:46, Nitesh Narayan Lal wrote:
>> On 10/9/19 12:35 PM, Alexander Duyck wrote:
>>> On Wed, 2019-10-09 at 11:21 -0400, Nitesh Narayan Lal wrote:
 On 10/7/19 1:06 PM, Nitesh Narayan Lal wrote:
[...]
 Ideally I would like to get code review for patches 3 and 4, and spend my
 time addressing issues reported there. The main thing I need input on is
 if the solution of allowing the list iterators to be reset is good enough
 to address the compaction issues that were pointed out several releases
 ago or if I have to look for another solution. Also I have changed things
 so that page_reporting.h was split over two files with the new one now
 living in the mm/ folder. By doing that I was hoping to reduce the
 exposure of the internal state of the free-lists so that essentially all
 we end up providing is an interface for the notifier to be used by virtio-
 balloon.
>> If everyone agrees that what you are proposing is the best way to move
>> forward then, by all means, let's go ahead with it. :)
>>
> Sorry, I didn't get to follow the discussion; I caught a cold and my body
> is still fighting off the last of it.

I hope you feel better soon.

>
> Is there any rough summary of how much faster Alexander's approach is
> compared to some external tracking? For external tracking there is a
> lot of optimization potential as far as I can tell; however, a rough
> summary of "how far we are off" should be possible by now.
>
> Also, are there benchmarks/setups where both perform the same?

So I tried to follow up on Alexander's suggestion to recreate his setup,
and with the posted v12 I did observe a drop in will-it-scale/page_fault,
specifically in the number of threads launched on the nth core.

However, I did not see that degradation after making the changes which I
previously suggested on top of v12.

After those changes, as per my observation, both series introduce more or
less the same amount of degradation over an unmodified kernel.

In any case, if there are more suggestions, I am open to performing more
experiments to ensure that there is no further degradation with my series.

-- 
Thanks
Nitesh



Re: [PATCH v11 0/6] mm / virtio: Provide support for unused page reporting

2019-10-10 Thread David Hildenbrand
On 09.10.19 21:46, Nitesh Narayan Lal wrote:
> 
> On 10/9/19 12:35 PM, Alexander Duyck wrote:
>> On Wed, 2019-10-09 at 11:21 -0400, Nitesh Narayan Lal wrote:
>>> On 10/7/19 1:06 PM, Nitesh Narayan Lal wrote:
>>> [...]
> So what was the size of your guest? One thing that just occurred to me is
> that you might be running a much smaller guest than I was.
 I am running a 30 GB guest.

>>>  If so I would have expected a much higher difference versus
>>> baseline as zeroing/faulting the pages in the host gets expensive fairly
>>> quick. What is the host kernel you are running your test on? I'm just
>>> wondering if there is some additional overhead currently limiting your
>>> setup. My host kernel was just the same kernel I was running in the
>>> guest, just built without the patches applied.
>> Right now I have a different host-kernel. I can install the same kernel
>> to the host as well and see if that changes anything.
> The host kernel will have a fairly significant impact as I recall. For
> example running a stock CentOS kernel lowered the performance compared to
> running a linux-next kernel. As a result the numbers looked better since
> the overall baseline was lower to begin with as the host OS was
> introducing additional overhead.
 I see; in that case I will try installing the same guest kernel
 on the host as well.
>>> As per your suggestion, I tried replacing the host kernel with an
>>> upstream kernel without my patches, i.e., my host kernel is built on top
>>> of the upstream kernel's master branch as of a Sept 23rd commit, and the
>>> guest has the same kernel for the no-hinting case and the same kernel +
>>> my patches for the page-reporting case.
>>>
>>> With the changes reported earlier on top of v12, I am not seeing any further
>>> degradation (other than what I have previously reported).
>>>
>>> To be sure that THP is actively used, I did an experiment where I
>>> changed the MEMSIZE in the page_fault test. On doing so, THP usage
>>> checked via /proc/meminfo also increased, as I expected.
>>>
>>> In any case, if you find something else please let me know and I will
>>> look into it again.
>>>
>>>
>>> I am still looking into your suggestion about cache line bouncing and
>>> will reply to it if I have more questions.
>>>
>>>
>>> [...]
>> I really feel like this discussion has gone off course. The idea here is
>> to review this patch set[1] and provide working alternatives if there are
>> issues with the current approach.
> 
> 
> Agreed.
> 
>>
>> The bitmap based approach still has a number of outstanding issues
>> including sparse memory and hotplug which have yet to be addressed.
> 
> True, but I don't think those two are a blocker.
> 
> For a sparse zone, as we maintain the bitmap at a granularity of
> (MAX_ORDER - 2), (MAX_ORDER - 1), etc., the memory wastage should be
> negligible in most cases.
> 
> For memory hotplug/hotremove, I did make sure that I don't break anything,
> even if a user starts using this feature with page reporting enabled.
> However, it is true that I don't report or capture any memory added or
> removed through it.
> 
> Fixing these issues will be an optimization which I will do once my basic
> framework is ready and in shape.
> 
>>  We can
>> gloss over that, but there is a good chance that resolving those would
>> have potential performance implications. With this most recent change
>> there is now also the fact that it can only really support reporting at
>> one page order so the solution is now much more prone to issues with
>> memory fragmentation than it was before. I would consider the fact that
>> my solution works with multiple page orders, while the bitmap approach
>> requires MAX_ORDER - 1, another obvious win for my solution.
> 
> This is just a configuration change and only requires updating the
> macro 'PAGE_REPORTING_MIN_ORDER' to what you are using.
> 
> What order we want to report could vary based on the use case in which
> we are deploying the solution.
> 
> Ideally, this should be configurable, maybe at compile time, or we can
> stick with pageblock_order, which you originally suggested and used.
> 
>> Until we can get back to the point where we are comparing apples to apples
>> I would prefer not to benchmark the bitmap solution as without the extra
>> order limitation it was over 20% worse than my solution performance-wise.
> 
> Understood.
> However, as I reported previously, after making the configuration changes
> on top of the v12 posting I don't see the degradation.
> 
> I will be happy to try out more suggestions to see if the issue is really
> fixed.
> 
> I have started looking into your concern of cacheline bouncing after
> which I will look into Michal's suggestion of using page-isolation APIs to
> isolate and release pages back. After that, I can decide on
> posting my next series (if it is required).

Re: [PATCH v11 0/6] mm / virtio: Provide support for unused page reporting

2019-10-09 Thread Nitesh Narayan Lal


On 10/9/19 12:35 PM, Alexander Duyck wrote:
> On Wed, 2019-10-09 at 11:21 -0400, Nitesh Narayan Lal wrote:
>> On 10/7/19 1:06 PM, Nitesh Narayan Lal wrote:
>> [...]
 So what was the size of your guest? One thing that just occurred to me is
 that you might be running a much smaller guest than I was.
>>> I am running a 30 GB guest.
>>>
>>  If so I would have expected a much higher difference versus
>> baseline as zeroing/faulting the pages in the host gets expensive fairly
>> quick. What is the host kernel you are running your test on? I'm just
>> wondering if there is some additional overhead currently limiting your
>> setup. My host kernel was just the same kernel I was running in the 
>> guest,
>> just built without the patches applied.
> Right now I have a different host-kernel. I can install the same kernel
> to the host as well and see if that changes anything.
 The host kernel will have a fairly significant impact as I recall. For
 example running a stock CentOS kernel lowered the performance compared to
 running a linux-next kernel. As a result the numbers looked better since
 the overall baseline was lower to begin with as the host OS was
 introducing additional overhead.
>>> I see; in that case I will try installing the same guest kernel
>>> on the host as well.
>> As per your suggestion, I tried replacing the host kernel with an
>> upstream kernel without my patches, i.e., my host kernel is built on top
>> of the upstream kernel's master branch as of a Sept 23rd commit, and the
>> guest has the same kernel for the no-hinting case and the same kernel +
>> my patches for the page-reporting case.
>>
>> With the changes reported earlier on top of v12, I am not seeing any further
>> degradation (other than what I have previously reported).
>>
>> To be sure that THP is actively used, I did an experiment where I
>> changed the MEMSIZE in the page_fault test. On doing so, THP usage
>> checked via /proc/meminfo also increased, as I expected.
>>
>> In any case, if you find something else please let me know and I will
>> look into it again.
>>
>>
>> I am still looking into your suggestion about cache line bouncing and
>> will reply to it if I have more questions.
>>
>>
>> [...]
> I really feel like this discussion has gone off course. The idea here is
> to review this patch set[1] and provide working alternatives if there are
> issues with the current approach.


Agreed.

>
> The bitmap based approach still has a number of outstanding issues
> including sparse memory and hotplug which have yet to be addressed.

True, but I don't think those two are a blocker.

For a sparse zone, as we maintain the bitmap at a granularity of
(MAX_ORDER - 2), (MAX_ORDER - 1), etc., the memory wastage should be
negligible in most cases.
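
As a back-of-envelope check of that claim (assuming 4K base pages, so an
order-10, i.e. MAX_ORDER - 1, chunk is 4 MB, and taking the 30 GB guest
discussed later in this thread):

#include <stdio.h>

int main(void)
{
	/* 30 GB guest, one bit per order-10 (4 MB) chunk */
	unsigned long long guest_bytes = 30ULL << 30;
	unsigned long long chunk_bytes = 4ULL << 20;
	unsigned long long bits = guest_bytes / chunk_bytes;

	/* 7680 bits -> 960 bytes of bitmap for the whole guest */
	printf("%llu bits -> %llu bytes\n", bits, bits / 8);
	return 0;
}

At one bit per 4 MB chunk, the whole guest needs well under a kilobyte
of bitmap.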

For memory hotplug/hotremove, I did make sure that I don't break anything,
even if a user starts using this feature with page reporting enabled.
However, it is true that I don't report or capture any memory added or
removed through it.

Fixing these issues will be an optimization which I will do once my basic
framework is ready and in shape.

>  We can
> gloss over that, but there is a good chance that resolving those would
> have potential performance implications. With this most recent change
> there is now also the fact that it can only really support reporting at
> one page order so the solution is now much more prone to issues with
> memory fragmentation than it was before. I would consider the fact that
> my solution works with multiple page orders, while the bitmap approach
> requires MAX_ORDER - 1, another obvious win for my solution.

This is just a configuration change and only requires updating the
macro 'PAGE_REPORTING_MIN_ORDER' to what you are using.

What order we want to report could vary based on the use case in which
we are deploying the solution.

Ideally, this should be configurable, maybe at compile time, or we can
stick with pageblock_order, which you originally suggested and used.
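
A minimal sketch of what that could look like (CONFIG_PAGE_REPORTING_ORDER
is a hypothetical Kconfig knob, not something from either series):

/* Sketch only: compile-time selection of the reporting order, falling
 * back to pageblock_order as suggested above. */
#ifdef CONFIG_PAGE_REPORTING_ORDER
#define PAGE_REPORTING_MIN_ORDER	CONFIG_PAGE_REPORTING_ORDER
#else
#define PAGE_REPORTING_MIN_ORDER	pageblock_order
#endif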

> Until we can get back to the point where we are comparing apples to apples
> I would prefer not to benchmark the bitmap solution as without the extra
> order limitation it was over 20% worse than my solution performance-wise.

Understood.
However, as I reported previously, after making the configuration changes
on top of the v12 posting I don't see the degradation.

I will be happy to try out more suggestions to see if the issue is really fixed.

I have started looking into your concern of cacheline bouncing after
which I will look into Michal's suggestion of using page-isolation APIs to
isolate and release pages back. After that, I can decide on
posting my next series (if it is required).

>
> Ideally I would like to get code review for patches 3 and 4, and spend my
> time addressing issues reported there. The main thing I need input on is
> if the solution of allowing the list iterators to be reset is good enough
> to address the compaction issues that were pointed out several releases
> ago or if I have to look for another solution.

Re: [PATCH v11 0/6] mm / virtio: Provide support for unused page reporting

2019-10-09 Thread Alexander Duyck
On Wed, 2019-10-09 at 13:08 -0400, Nitesh Narayan Lal wrote:
> On 10/9/19 12:50 PM, Alexander Duyck wrote:
> > On Wed, 2019-10-09 at 12:25 -0400, Nitesh Narayan Lal wrote:
> > > On 10/7/19 1:20 PM, Alexander Duyck wrote:
> > > > On Mon, Oct 7, 2019 at 10:07 AM Nitesh Narayan Lal wrote:
> > > > > On 10/7/19 12:27 PM, Alexander Duyck wrote:
> > > > > > On Mon, 2019-10-07 at 12:19 -0400, Nitesh Narayan Lal wrote:
> > > > > > > On 10/7/19 11:33 AM, Alexander Duyck wrote:
> > > > > > > > On Mon, 2019-10-07 at 08:29 -0400, Nitesh Narayan Lal wrote:
> > > > > > > > > On 10/2/19 10:25 AM, Alexander Duyck wrote:
> > > > 
> > > > 
> > > > > > > > > page_reporting.c change:
> > > > > > > > > @@ -101,8 +101,12 @@ static void scan_zone_bitmap(struct page_reporting_config *phconf,
> > > > > > > > > /* Process only if the page is still online */
> > > > > > > > > page = pfn_to_online_page((setbit << PAGE_REPORTING_MIN_ORDER) +
> > > > > > > > >   zone->base_pfn);
> > > > > > > > > -   if (!page)
> > > > > > > > > +   if (!page || !PageBuddy(page)) {
> > > > > > > > > +   clear_bit(setbit, zone->bitmap);
> > > > > > > > > +   atomic_dec(&zone->free_pages);
> > > > > > > > > continue;
> > > > > > > > > +   }
> > > > > > > > > 
> > > > > > > > I suspect the zone->free_pages is going to be expensive for you
> > > > > > > > to deal with. It is a global atomic value, and the cacheline
> > > > > > > > containing it is going to bounce. As a result, things like
> > > > > > > > setting the bitmap will be more expensive, as every time a CPU
> > > > > > > > increments free_pages it will likely have to take the cache line
> > > > > > > > containing the bitmap pointer as well.
> > > > > > > I see, I will have to explore this more. I am wondering if there
> > > > > > > is a way to measure this if its effect is not visible in
> > > > > > > will-it-scale/page_fault1. If there is a noticeable amount of
> > > > > > > degradation, I will have to address this.
> > > > > > If nothing else you might look at seeing if you can split up the
> > > > > > structures so that the bitmap and nr_bits is in a different region
> > > > > > somewhere since those are read-mostly values.
> > > > > ok, I will try to understand the issue and your suggestion.
> > > > > Thank you for bringing this up.
> > > > > 
> > > > > > Also you are now updating the bitmap and free_pages both inside and
> > > > > > outside of the zone lock so that will likely have some impact.
> > > > > So as per your previous suggestion, I have made the bitmap structure
> > > > > object an RCU-protected pointer, so we are safe from that side.
> > > > > The other downside I can think of is a race where one side is
> > > > > trying to increment free_pages while another is trying to decrement
> > > > > it. However, free_pages being an atomic variable, that should not
> > > > > be a problem. Did I miss anything?
> > > > I'm not so much worried about a race as the cache line bouncing
> > > > effect. Basically your notifier combined with this hinting thread
> > > > will likely result in more time spent by the thread that holds the
> > > > lock since it will be trying to access the bitmap to set the bit and
> > > > the free_pages to report the bit, but at the same time you will have
> > > > this thread clearing bits and decrementing the free_pages values.
> > > > 
> > > > One thing you could consider in your worker thread would be to
> > > > reallocate and replace the bitmap every time you plan to walk it. By
> > > > doing that you would avoid the cacheline bouncing on the bitmap since
> > > > you would only have to read it, and you would no longer have another
> > > > thread dirtying it. You could essentially reset the free_pages at the
> > > > same time you replace the bitmap. It would need to all happen with the
> > > > zone lock held though when you swap it out.
> > > If I am not mistaken, then from what you are suggesting I will have to
> > > hold the zone lock for the entire duration of the swap & scan, which
> > > would be costly if the bitmap is large, wouldn't it? Also, we might end
> > > up missing free pages that are getting freed while we are scanning.
> > You would only need to hold the zone lock when you swap the bitmap. Once
> > it is swapped you wouldn't need to worry about the locking again for
> > bitmap access since your worker thread would be the only one holding the
> > current bitmap. Think of it as a batch clearing of the bits.
> 
> I see.
> 
> > You already end up missing pages freed while scanning since you are doing
> > it linearly.
> 
> I was referring to free pages for which bits will not be set while we
> are doing the batch clearing of the bits.

Re: [PATCH v11 0/6] mm / virtio: Provide support for unused page reporting

2019-10-09 Thread Nitesh Narayan Lal


On 10/9/19 12:50 PM, Alexander Duyck wrote:
> On Wed, 2019-10-09 at 12:25 -0400, Nitesh Narayan Lal wrote:
>> On 10/7/19 1:20 PM, Alexander Duyck wrote:
>>> On Mon, Oct 7, 2019 at 10:07 AM Nitesh Narayan Lal wrote:
 On 10/7/19 12:27 PM, Alexander Duyck wrote:
> On Mon, 2019-10-07 at 12:19 -0400, Nitesh Narayan Lal wrote:
>> On 10/7/19 11:33 AM, Alexander Duyck wrote:
>>> On Mon, 2019-10-07 at 08:29 -0400, Nitesh Narayan Lal wrote:
 On 10/2/19 10:25 AM, Alexander Duyck wrote:
>>> 
>>>
 page_reporting.c change:
 @@ -101,8 +101,12 @@ static void scan_zone_bitmap(struct page_reporting_config *phconf,
 /* Process only if the page is still online */
 page = pfn_to_online_page((setbit << PAGE_REPORTING_MIN_ORDER) +
   zone->base_pfn);
 -   if (!page)
 +   if (!page || !PageBuddy(page)) {
 +   clear_bit(setbit, zone->bitmap);
 +   atomic_dec(&zone->free_pages);
 continue;
 +   }

>>> I suspect the zone->free_pages is going to be expensive for you to deal
>>> with. It is a global atomic value, and the cacheline containing it is
>>> going to bounce. As a result, things like setting the bitmap will be
>>> more expensive, as every time a CPU increments free_pages it will likely
>>> have to take the cache line containing the bitmap pointer as well.
>> I see, I will have to explore this more. I am wondering if there is a
>> way to measure this if its effect is not visible in
>> will-it-scale/page_fault1. If there is a noticeable amount of
>> degradation, I will have to address this.
> If nothing else you might look at seeing if you can split up the
> structures so that the bitmap and nr_bits is in a different region
> somewhere since those are read-mostly values.
 ok, I will try to understand the issue and your suggestion.
 Thank you for bringing this up.

> Also you are now updating the bitmap and free_pages both inside and
> outside of the zone lock so that will likely have some impact.
 So as per your previous suggestion, I have made the bitmap structure
 object an RCU-protected pointer, so we are safe from that side.
 The other downside I can think of is a race where one side is trying to
 increment free_pages while another is trying to decrement it. However,
 free_pages being an atomic variable, that should not be a problem.
 Did I miss anything?
>>> I'm not so much worried about a race as the cache line bouncing
>>> effect. Basically your notifier combined with this hinting thread
>>> will likely result in more time spent by the thread that holds the
>>> lock since it will be trying to access the bitmap to set the bit and
>>> the free_pages to report the bit, but at the same time you will have
>>> this thread clearing bits and decrementing the free_pages values.
>>>
>>> One thing you could consider in your worker thread would be to
>>> reallocate and replace the bitmap every time you plan to walk it. By
>>> doing that you would avoid the cacheline bouncing on the bitmap since
>>> you would only have to read it, and you would no longer have another
>>> thread dirtying it. You could essentially reset the free_pages at the
>>> same time you replace the bitmap. It would need to all happen with the
>>> zone lock held though when you swap it out.
>> If I am not mistaken, then from what you are suggesting I will have to
>> hold the zone lock for the entire duration of the swap & scan, which
>> would be costly if the bitmap is large, wouldn't it? Also, we might end
>> up missing free pages that are getting freed while we are scanning.
> You would only need to hold the zone lock when you swap the bitmap. Once
> it is swapped you wouldn't need to worry about the locking again for
> bitmap access since your worker thread would be the only one holding the
> current bitmap. Think of it as a batch clearing of the bits.

I see.

>
> You already end up missing pages freed while scanning since you are doing
> it linearly.

I was referring to free pages for which bits will not be set while we
are doing the batch clearing of the bits.

>
>> As far as the free_pages count is concerned, I am wondering if I should
>> replace it with zone->free_area[REPORTING_ORDER].nr_free, which is
>> already there (I still need to explore this in a bit more depth).
>>
>>> - Alex
> So there end up being two ways you could use nr_free. One is to track it
> the way I did, with the number of reported pages being tracked; however,
> that requires reducing the count when reported pages are pulled from the
> free_area and identifying reported pages vs. unreported ones.
>
> The other option would be to look at converting nr_free 

Re: [PATCH v11 0/6] mm / virtio: Provide support for unused page reporting

2019-10-09 Thread Alexander Duyck
On Wed, 2019-10-09 at 12:25 -0400, Nitesh Narayan Lal wrote:
> On 10/7/19 1:20 PM, Alexander Duyck wrote:
> > On Mon, Oct 7, 2019 at 10:07 AM Nitesh Narayan Lal wrote:
> > > On 10/7/19 12:27 PM, Alexander Duyck wrote:
> > > > On Mon, 2019-10-07 at 12:19 -0400, Nitesh Narayan Lal wrote:
> > > > > On 10/7/19 11:33 AM, Alexander Duyck wrote:
> > > > > > On Mon, 2019-10-07 at 08:29 -0400, Nitesh Narayan Lal wrote:
> > > > > > > On 10/2/19 10:25 AM, Alexander Duyck wrote:
> > 
> > 
> > > > > > > page_reporting.c change:
> > > > > > > @@ -101,8 +101,12 @@ static void scan_zone_bitmap(struct page_reporting_config *phconf,
> > > > > > > /* Process only if the page is still online */
> > > > > > > page = pfn_to_online_page((setbit << PAGE_REPORTING_MIN_ORDER) +
> > > > > > >   zone->base_pfn);
> > > > > > > -   if (!page)
> > > > > > > +   if (!page || !PageBuddy(page)) {
> > > > > > > +   clear_bit(setbit, zone->bitmap);
> > > > > > > +   atomic_dec(&zone->free_pages);
> > > > > > > continue;
> > > > > > > +   }
> > > > > > > 
> > > > > > I suspect the zone->free_pages is going to be expensive for you
> > > > > > to deal with. It is a global atomic value, and the cacheline
> > > > > > containing it is going to bounce. As a result, things like setting
> > > > > > the bitmap will be more expensive, as every time a CPU increments
> > > > > > free_pages it will likely have to take the cache line containing
> > > > > > the bitmap pointer as well.
> > > > > I see, I will have to explore this more. I am wondering if there is
> > > > > a way to measure this if its effect is not visible in
> > > > > will-it-scale/page_fault1. If there is a noticeable amount of
> > > > > degradation, I will have to address this.
> > > > If nothing else you might look at seeing if you can split up the
> > > > structures so that the bitmap and nr_bits is in a different region
> > > > somewhere since those are read-mostly values.
> > > ok, I will try to understand the issue and your suggestion.
> > > Thank you for bringing this up.
> > > 
> > > > Also you are now updating the bitmap and free_pages both inside and
> > > > outside of the zone lock so that will likely have some impact.
> > > So as per your previous suggestion, I have made the bitmap structure
> > > object an RCU-protected pointer, so we are safe from that side.
> > > The other downside I can think of is a race where one side is trying
> > > to increment free_pages while another is trying to decrement it.
> > > However, free_pages being an atomic variable, that should not be a
> > > problem. Did I miss anything?
> > I'm not so much worried about a race as the cache line bouncing
> > effect. Basically your notifier combined with this hinting thread
> > will likely result in more time spent by the thread that holds the
> > lock since it will be trying to access the bitmap to set the bit and
> > the free_pages to report the bit, but at the same time you will have
> > this thread clearing bits and decrementing the free_pages values.
> > 
> > One thing you could consider in your worker thread would be to
> > reallocate and replace the bitmap every time you plan to walk it. By
> > doing that you would avoid the cacheline bouncing on the bitmap since
> > you would only have to read it, and you would no longer have another
> > thread dirtying it. You could essentially reset the free_pages at the
> > same time you replace the bitmap. It would need to all happen with the
> > zone lock held though when you swap it out.
> 
> If I am not mistaken, then from what you are suggesting I will have to
> hold the zone lock for the entire duration of the swap & scan, which
> would be costly if the bitmap is large, wouldn't it? Also, we might end
> up missing free pages that are getting freed while we are scanning.

You would only need to hold the zone lock when you swap the bitmap. Once
it is swapped you wouldn't need to worry about the locking again for
bitmap access since your worker thread would be the only one holding the
current bitmap. Think of it as a batch clearing of the bits.

You already end up missing pages freed while scanning since you are doing
it linearly.

> As far as the free_pages count is concerned, I am wondering if I should
> replace it with zone->free_area[REPORTING_ORDER].nr_free, which is
> already there (I still need to explore this in a bit more depth).
> 
> > - Alex

So there end up being two ways you could use nr_free. One is to track it
the way I did, with the number of reported pages being tracked; however,
that requires reducing the count when reported pages are pulled from the
free_area and identifying reported pages vs. unreported ones.

The other option would be to look at converting 

Re: [PATCH v11 0/6] mm / virtio: Provide support for unused page reporting

2019-10-09 Thread Alexander Duyck
On Wed, 2019-10-09 at 11:21 -0400, Nitesh Narayan Lal wrote:
> On 10/7/19 1:06 PM, Nitesh Narayan Lal wrote:
> [...]
> > > So what was the size of your guest? One thing that just occurred to me is
> > > that you might be running a much smaller guest than I was.
> > I am running a 30 GB guest.
> > 
> > > > >  If so I would have expected a much higher difference versus
> > > > > baseline as zeroing/faulting the pages in the host gets expensive 
> > > > > fairly
> > > > > quick. What is the host kernel you are running your test on? I'm just
> > > > > wondering if there is some additional overhead currently limiting your
> > > > > setup. My host kernel was just the same kernel I was running in the 
> > > > > guest,
> > > > > just built without the patches applied.
> > > > Right now I have a different host-kernel. I can install the same
> > > > kernel to the host as well and see if that changes anything.
> > > The host kernel will have a fairly significant impact as I recall. For
> > > example running a stock CentOS kernel lowered the performance compared to
> > > running a linux-next kernel. As a result the numbers looked better since
> > > the overall baseline was lower to begin with as the host OS was
> > > introducing additional overhead.
> > I see; in that case I will try installing the same guest kernel
> > on the host as well.
> 
> As per your suggestion, I tried replacing the host kernel with an
> upstream kernel without my patches, i.e., my host kernel is built on top
> of the upstream kernel's master branch as of a Sept 23rd commit, and the
> guest has the same kernel for the no-hinting case and the same kernel +
> my patches for the page-reporting case.
> 
> With the changes reported earlier on top of v12, I am not seeing any further
> degradation (other than what I have previously reported).
> 
> To be sure that THP is actively used, I did an experiment where I changed
> the MEMSIZE in the page_fault test. On doing so, THP usage checked via
> /proc/meminfo also increased, as I expected.
> 
> In any case, if you find something else please let me know and I will
> look into it again.
> 
> 
> I am still looking into your suggestion about cache line bouncing and
> will reply to it if I have more questions.
> 
> 
> [...]

I really feel like this discussion has gone off course. The idea here is
to review this patch set[1] and provide working alternatives if there are
issues with the current approach.

The bitmap based approach still has a number of outstanding issues
including sparse memory and hotplug which have yet to be addressed. We can
gloss over that, but there is a good chance that resolving those would
have potential performance implications. With this most recent change
there is now also the fact that it can only really support reporting at
one page order so the solution is now much more prone to issues with
memory fragmentation than it was before. I would consider the fact that
my solution works with multiple page orders, while the bitmap approach
requires MAX_ORDER - 1, another obvious win for my solution.
Until we can get back to the point where we are comparing apples to apples
I would prefer not to benchmark the bitmap solution as without the extra
order limitation it was over 20% worse than my solution performance-wise.

Ideally I would like to get code review for patches 3 and 4, and spend my
time addressing issues reported there. The main thing I need input on is
if the solution of allowing the list iterators to be reset is good enough
to address the compaction issues that were pointed out several releases
ago or if I have to look for another solution. Also I have changed things
so that page_reporting.h was split over two files with the new one now
living in the mm/ folder. By doing that I was hoping to reduce the
exposure of the internal state of the free-lists so that essentially all
we end up providing is an interface for the notifier to be used by virtio-
balloon.

Thanks.

- Alex

[1]: 
https://lore.kernel.org/lkml/20191001152441.27008.99285.stgit@localhost.localdomain/
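
For reference, the narrow mm/-to-driver boundary described above amounts
to something like the following; this is a sketch based on the description
here, not the literal code from the series, and the names are illustrative
(it assumes <linux/scatterlist.h> for struct scatterlist):

/* Sketch of the mm/ <-> virtio-balloon boundary: the free-list internals
 * stay in mm/, and the device only registers a callback that receives
 * batches of isolated free pages. */
struct page_reporting_dev_info {
	/* called by mm with a scatterlist of isolated free pages */
	int (*report)(struct page_reporting_dev_info *dev_info,
		      struct scatterlist *sg, unsigned int nents);
	unsigned int capacity;	/* max pages accepted per batch */
};

int page_reporting_register(struct page_reporting_dev_info *dev_info);
void page_reporting_unregister(struct page_reporting_dev_info *dev_info);

virtio-balloon then only needs to implement the callback and register
itself; nothing about the free lists leaves mm/.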



Re: [PATCH v11 0/6] mm / virtio: Provide support for unused page reporting

2019-10-09 Thread Nitesh Narayan Lal


On 10/7/19 1:20 PM, Alexander Duyck wrote:
> On Mon, Oct 7, 2019 at 10:07 AM Nitesh Narayan Lal wrote:
>>
>> On 10/7/19 12:27 PM, Alexander Duyck wrote:
>>> On Mon, 2019-10-07 at 12:19 -0400, Nitesh Narayan Lal wrote:
 On 10/7/19 11:33 AM, Alexander Duyck wrote:
> On Mon, 2019-10-07 at 08:29 -0400, Nitesh Narayan Lal wrote:
>> On 10/2/19 10:25 AM, Alexander Duyck wrote:
> 
>
>> page_reporting.c change:
>> @@ -101,8 +101,12 @@ static void scan_zone_bitmap(struct page_reporting_config *phconf,
>> /* Process only if the page is still online */
>> page = pfn_to_online_page((setbit << PAGE_REPORTING_MIN_ORDER) +
>>   zone->base_pfn);
>> -   if (!page)
>> +   if (!page || !PageBuddy(page)) {
>> +   clear_bit(setbit, zone->bitmap);
>> +   atomic_dec(&zone->free_pages);
>> continue;
>> +   }
>>
> I suspect the zone->free_pages is going to be expensive for you to deal
> with. It is a global atomic value, and the cacheline containing it is
> going to bounce. As a result, things like setting the bitmap will be more
> expensive, as every time a CPU increments free_pages it will likely have
> to take the cache line containing the bitmap pointer as well.
 I see, I will have to explore this more. I am wondering if there is a way
 to measure this if its effect is not visible in will-it-scale/page_fault1.
 If there is a noticeable amount of degradation, I will have to address this.
>>> If nothing else you might look at seeing if you can split up the
>>> structures so that the bitmap and nr_bits is in a different region
>>> somewhere since those are read-mostly values.
>> ok, I will try to understand the issue and your suggestion.
>> Thank you for bringing this up.
>>
>>> Also you are now updating the bitmap and free_pages both inside and
>>> outside of the zone lock so that will likely have some impact.
>> So as per your previous suggestion, I have made the bitmap structure
>> object an RCU-protected pointer, so we are safe from that side.
>> The other downside I can think of is a race where one side is trying to
>> increment free_pages while another is trying to decrement it. However,
>> free_pages being an atomic variable, that should not be a problem.
>> Did I miss anything?
> I'm not so much worried about a race as the cache line bouncing
> effect. Basically your notifier combined with this hinting thread
> will likely result in more time spent by the thread that holds the
> lock since it will be trying to access the bitmap to set the bit and
> the free_pages to report the bit, but at the same time you will have
> this thread clearing bits and decrementing the free_pages values.
>
> One thing you could consider in your worker thread would be to
> reallocate and replace the bitmap every time you plan to walk it. By
> doing that you would avoid the cacheline bouncing on the bitmap since
> you would only have to read it, and you would no longer have another
> thread dirtying it. You could essentially reset the free_pages at the
> same time you replace the bitmap. It would need to all happen with the
> zone lock held though when you swap it out.

If I am not mistaken, then from what you are suggesting I will have to hold
the zone lock for the entire duration of the swap & scan, which would be
costly if the bitmap is large, wouldn't it? Also, we might end up missing
free pages that are getting freed while we are scanning.

As far as the free_pages count is concerned, I am wondering if I should
replace it with zone->free_area[REPORTING_ORDER].nr_free, which is already
there (I still need to explore this in a bit more depth).
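
A minimal sketch of that idea (REPORTING_ORDER is a hypothetical stand-in
for the reporting order; nr_free is only stable while zone->lock is held):

/* Sketch only: reuse the per-order free count the buddy allocator
 * already maintains instead of a separate atomic counter. */
static unsigned long reporting_free_count(struct zone *zone)
{
	unsigned long flags, nr;

	spin_lock_irqsave(&zone->lock, flags);
	nr = zone->free_area[REPORTING_ORDER].nr_free;
	spin_unlock_irqrestore(&zone->lock, flags);

	return nr;
}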

>
> - Alex
-- 
Thanks
Nitesh



Re: [PATCH v11 0/6] mm / virtio: Provide support for unused page reporting

2019-10-09 Thread Nitesh Narayan Lal


On 10/7/19 1:06 PM, Nitesh Narayan Lal wrote:
[...]
>> So what was the size of your guest? One thing that just occurred to me is
>> that you might be running a much smaller guest than I was.
> I am running a 30 GB guest.
>
  If so I would have expected a much higher difference versus
 baseline as zeroing/faulting the pages in the host gets expensive fairly
 quick. What is the host kernel you are running your test on? I'm just
 wondering if there is some additional overhead currently limiting your
 setup. My host kernel was just the same kernel I was running in the guest,
 just built without the patches applied.
>>> Right now I have a different host-kernel. I can install the same kernel
>>> to the host as well and see if that changes anything.
>> The host kernel will have a fairly significant impact as I recall. For
>> example running a stock CentOS kernel lowered the performance compared to
>> running a linux-next kernel. As a result the numbers looked better since
>> the overall baseline was lower to begin with as the host OS was
>> introducing additional overhead.
> I see; in that case I will try installing the same guest kernel
> on the host as well.

As per your suggestion, I tried replacing the host kernel with an
upstream kernel without my patches, i.e., my host kernel is built on top
of the upstream kernel's master branch as of a Sept 23rd commit, and the
guest has the same kernel for the no-hinting case and the same kernel +
my patches for the page-reporting case.

With the changes reported earlier on top of v12, I am not seeing any further
degradation (other than what I have previously reported).

To be sure that THP is actively used, I did an experiment where I changed
the MEMSIZE in the page_fault test. On doing so, THP usage checked via
/proc/meminfo also increased, as I expected.

In any case, if you find something else please let me know and I will look
into it again.


I am still looking into your suggestion about cache line bouncing and will
reply to it if I have more questions.


[...]



-- 
Thanks
Nitesh



Re: [PATCH v11 0/6] mm / virtio: Provide support for unused page reporting

2019-10-07 Thread Alexander Duyck
On Mon, Oct 7, 2019 at 10:07 AM Nitesh Narayan Lal wrote:
>
>
> On 10/7/19 12:27 PM, Alexander Duyck wrote:
> > On Mon, 2019-10-07 at 12:19 -0400, Nitesh Narayan Lal wrote:
> >> On 10/7/19 11:33 AM, Alexander Duyck wrote:
> >>> On Mon, 2019-10-07 at 08:29 -0400, Nitesh Narayan Lal wrote:
>  On 10/2/19 10:25 AM, Alexander Duyck wrote:



>  page_reporting.c change:
>  @@ -101,8 +101,12 @@ static void scan_zone_bitmap(struct page_reporting_config *phconf,
>  /* Process only if the page is still online */
>  page = pfn_to_online_page((setbit << PAGE_REPORTING_MIN_ORDER) +
>    zone->base_pfn);
>  -   if (!page)
>  +   if (!page || !PageBuddy(page)) {
>  +   clear_bit(setbit, zone->bitmap);
>  +   atomic_dec(&zone->free_pages);
>  continue;
>  +   }
> 
> >>> I suspect the zone->free_pages is going to be expensive for you to deal
> >>> with. It is a global atomic value, and the cacheline containing it is
> >>> going to bounce. As a result, things like setting the bitmap will be
> >>> more expensive, as every time a CPU increments free_pages it will
> >>> likely have to take the cache line containing the bitmap pointer as well.
> >> I see, I will have to explore this more. I am wondering if there is a
> >> way to measure this if its effect is not visible in
> >> will-it-scale/page_fault1. If there is a noticeable amount of
> >> degradation, I will have to address this.
> > If nothing else you might look at seeing if you can split up the
> > structures so that the bitmap and nr_bits is in a different region
> > somewhere since those are read-mostly values.
>
> ok, I will try to understand the issue and your suggestion.
> Thank you for bringing this up.
>
> > Also you are now updating the bitmap and free_pages both inside and
> > outside of the zone lock so that will likely have some impact.
>
> So as per your previous suggestion, I have made the bitmap structure
> object an RCU-protected pointer, so we are safe from that side.
> The other downside I can think of is a race where one side is trying to
> increment free_pages while another is trying to decrement it. However,
> free_pages being an atomic variable, that should not be a problem.
> Did I miss anything?

I'm not so much worried about a race as the cache line bouncing
effect. Basically your notifier combined with this hinting thread
will likely result in more time spent by the thread that holds the
lock since it will be trying to access the bitmap to set the bit and
the free_pages to report the bit, but at the same time you will have
this thread clearing bits and decrementing the free_pages values.
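
A sketch of the structure split suggested earlier in the thread (field
names are illustrative, not from the posted series):

/* Sketch only: keep the read-mostly fields (bitmap pointer, nr_bits)
 * on a different cache line than the hot atomic counter, so CPUs
 * updating free_pages don't keep stealing the line that holds the
 * bitmap pointer. */
struct zone_reporting_state {
	/* read-mostly after setup */
	unsigned long *bitmap;
	unsigned long nr_bits;

	/* written constantly by CPUs freeing pages and by the worker */
	atomic_t free_pages ____cacheline_aligned;
};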

One thing you could consider in your worker thread would be to
reallocate and replace the bitmap every time you plan to walk it. By
doing that you would avoid the cacheline bouncing on the bitmap since
you would only have to read it, and you would no longer have another
thread dirtying it. You could essentially reset the free_pages at the
same time you replace the bitmap. It would need to all happen with the
zone lock held though when you swap it out.

- Alex
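
A minimal sketch of the swap-and-batch-clear idea described above,
assuming the bitmap is already published as an RCU-protected pointer as
mentioned earlier in the thread (the structure and helper names, and the
report_bitmap/free_pages fields on struct zone, are illustrative):

/* Sketch only: swap in a fresh bitmap under zone->lock, then walk the
 * old one without further locking, since the worker is now its only
 * user. This turns bit-clearing into a read-only batch scan. */
static void scan_and_replace_bitmap(struct zone *zone,
				    struct reporting_bitmap *fresh)
{
	struct reporting_bitmap *old;
	unsigned long flags, bit;

	spin_lock_irqsave(&zone->lock, flags);
	old = rcu_dereference_protected(zone->report_bitmap,
					lockdep_is_held(&zone->lock));
	rcu_assign_pointer(zone->report_bitmap, fresh);
	atomic_set(&zone->free_pages, 0);	/* batch reset of the counter */
	spin_unlock_irqrestore(&zone->lock, flags);

	synchronize_rcu();	/* wait for in-flight setters of the old bitmap */

	/* exclusive, read-only walk: no cacheline ping-pong with setters */
	for_each_set_bit(bit, old->bits, old->nr_bits)
		report_chunk(zone, bit);	/* hypothetical reporting hook */

	kfree(old);
}

The synchronize_rcu() is what makes the lock-free walk safe: any CPU that
raced with the swap finished setting its bit in the old bitmap before the
scan starts.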


Re: [PATCH v11 0/6] mm / virtio: Provide support for unused page reporting

2019-10-07 Thread Nitesh Narayan Lal


On 10/7/19 12:27 PM, Alexander Duyck wrote:
> On Mon, 2019-10-07 at 12:19 -0400, Nitesh Narayan Lal wrote:
>> On 10/7/19 11:33 AM, Alexander Duyck wrote:
>>> On Mon, 2019-10-07 at 08:29 -0400, Nitesh Narayan Lal wrote:
 On 10/2/19 10:25 AM, Alexander Duyck wrote:

>> [...]
 You don't have to, I can fix the issues in my patch-set. :)
> Sounds good. Hopefully the stuff I pointed out above helps you to get
> a reproduction and resolve the issues.
 So I did observe a significant drop in running my v12 patch-set [1] with the
 suggested test setup. However, on making certain changes the performance
 improved significantly.

 I used my v12 patch-set which I have posted earlier and made the following
 changes:
 1. Started reporting only (MAX_ORDER - 1) pages and increased the number of
 pages that can be reported at a time to 32 from 16. The intent of making
 these changes was to bring my configuration closer to what Alexander is
 using.
>>> The increase from 16 to 32 is valid. No point in working in too small of
>>> batches. However tightening the order to only test for MAX_ORDER - 1 seems
>>> like a step in the wrong direction. The bitmap approach doesn't have much
>>> value if it can only work with the highest order page. I realize it is
>>> probably necessary in order to make the trick for checking on page_buddy
>>> work, but it seems very limiting.
>> If using (pageblock_order - 1) is a better way to do this, then I can
>> probably switch to that.
>> I will agree with the fact that we have to make the reporting order
>> configurable, at least to an extent.
> I think you mean pageblock_order, not pageblock_order - 1. The problem
> with pageblock_order - 1 is that it will have a negative impact on
> performance as it would disable THP.

Ah, I see. Yes my bad.

>
 2. I made an additional change in my bitmap scanning logic to avoid
 acquiring the spinlock if the page is already allocated.
>>> Again, not a fan. It basically means you can only work with MAX_ORDER - 1
>>> and there will be no ability to work with anything smaller.
>>>
 Setup:
 On a 16 vCPU 30 GB single-NUMA guest affined to a single host NUMA node, I
 ran the modified will-it-scale/page_fault a number of times and calculated
 the average number of processes and threads launched on the 16th core to
 compare the impact of my patch-set against an unmodified kernel.


 Conclusion:
 %Drop in number of processes launched on 16th vCPU = 1-2%
 %Drop in number of threads launched on 16th vCPU = 5-6%
>>> These numbers don't make that much sense to me. Are you talking about a
>>> fully functioning setup that is madvising away the memory in the
>>> hypervisor?
>> Without making this change I was observing a significant drop
>> in the number of processes and specifically in the number of threads.
>> I did a double-check of the configuration which I have shared.
>> I was also observing the "AnonHugePages" via meminfo to check the THP usage.
>> Any more suggestions about what else I can do to verify?
>> I will be more than happy to try them out.
> So what was the size of your guest? One thing that just occurred to me is
> that you might be running a much smaller guest than I was.

I am running a 30 GB guest.

>
>>>  If so I would have expected a much higher difference versus
>>> baseline as zeroing/faulting the pages in the host gets expensive fairly
>>> quick. What is the host kernel you are running your test on? I'm just
>>> wondering if there is some additional overhead currently limiting your
>>> setup. My host kernel was just the same kernel I was running in the guest,
>>> just built without the patches applied.
>> Right now I have a different host-kernel. I can install the same kernel
>> to the host as well and see if that changes anything.
> The host kernel will have a fairly significant impact as I recall. For
> example running a stock CentOS kernel lowered the performance compared to
> running a linux-next kernel. As a result the numbers looked better since
> the overall baseline was lower to begin with as the host OS was
> introducing additional overhead.

I see; in that case I will try installing the same guest kernel
on the host as well.

>
 Other observations:
 - I also tried running Alexander's latest v11 page-reporting patch set and
   observed a similar amount of average degradation in the number of
   processes and threads.
 - I didn't include the linear component recorded by will-it-scale because
   for some reason it was fluctuating too much even when I was using an
   unmodified kernel. If required, I can investigate this further.

 Note: If there is a better way to analyze the will-it-scale/page_fault
 results then please do let me know.
>>> Honestly I have mostly just focused on the processes performance.

Re: [PATCH v11 0/6] mm / virtio: Provide support for unused page reporting

2019-10-07 Thread Alexander Duyck
On Mon, 2019-10-07 at 12:19 -0400, Nitesh Narayan Lal wrote:
> On 10/7/19 11:33 AM, Alexander Duyck wrote:
> > On Mon, 2019-10-07 at 08:29 -0400, Nitesh Narayan Lal wrote:
> > > On 10/2/19 10:25 AM, Alexander Duyck wrote:
> > > 
> [...]
> > > You don't have to, I can fix the issues in my patch-set. :)
> > > > Sounds good. Hopefully the stuff I pointed out above helps you to get
> > > > a reproduction and resolve the issues.
> > > So I did observe a significant drop in running my v12 patch-set [1] with
> > > the
> > > suggested test setup. However, on making certain changes the performance
> > > improved significantly.
> > > 
> > > I used my v12 patch-set which I have posted earlier and made the following
> > > changes:
> > > 1. Started reporting only (MAX_ORDER - 1) pages and increased the number 
> > > of
> > > pages that can be reported at a time to 32 from 16. The intent of 
> > > making
> > > these changes was to bring my configuration closer to what Alexander 
> > > is
> > > using.
> > The increase from 16 to 32 is valid. No point in working in too small of
> > batches. However tightening the order to only test for MAX_ORDER - 1 seems
> > like a step in the wrong direction. The bitmap approach doesn't have much
> > value if it can only work with the highest order page. I realize it is
> > probably necessary in order to make the trick for checking on page_buddy
> > work, but it seems very limiting.
> 
> If using (pageblock_order - 1) is a better way to do this, then I can probably
> switch to that.
> I will agree with the fact that we have to make the reporting order
> configurable, at least to an extent.

I think you mean pageblock_order, not pageblock_order - 1. The problem
with pageblock_order - 1 is that it will have a negative impact on
performance as it would disable THP.

> > > 2. I made an additional change in my bitmap scanning logic to avoid
> > > acquiring the spinlock if the page is already allocated.
> > Again, not a fan. It basically means you can only work with MAX_ORDER - 1
> > and there will be no ability to work with anything smaller.
> > 
> > > Setup:
> > > On a 16 vCPU 30 GB single-NUMA guest affined to a single host NUMA
> > > node, I ran the modified will-it-scale/page_fault a number of times and
> > > calculated the average number of processes and threads launched on the
> > > 16th core to compare the impact of my patch-set against an unmodified
> > > kernel.
> > > 
> > > 
> > > Conclusion:
> > > %Drop in number of processes launched on 16th vCPU = 1-2%
> > > %Drop in number of threads launched on 16th vCPU = 5-6%
> > These numbers don't make that much sense to me. Are you talking about a
> > fully functioning setup that is madvising away the memory in the
> > hypervisor?
> 
> Without making this change I was observing a significant drop
> in the number of processes and specifically in the number of threads.
> I did a double-check of the configuration which I have shared.
> I was also observing the "AnonHugePages" via meminfo to check the THP usage.
> Any more suggestions about what else I can do to verify?
> I will be more than happy to try them out.

So what was the size of your guest? One thing that just occurred to me is
that you might be running a much smaller guest than I was.

> >  If so I would have expected a much higher difference versus
> > baseline as zeroing/faulting the pages in the host gets expensive fairly
> > quick. What is the host kernel you are running your test on? I'm just
> > wondering if there is some additional overhead currently limiting your
> > setup. My host kernel was just the same kernel I was running in the guest,
> > just built without the patches applied.
> 
> Right now I have a different host-kernel. I can install the same kernel to the
> host as well and see if that changes anything.

The host kernel will have a fairly significant impact as I recall. For
example running a stock CentOS kernel lowered the performance compared to
running a linux-next kernel. As a result the numbers looked better since
the overall baseline was lower to begin with as the host OS was
introducing additional overhead.

> > > Other observations:
> > > - I also tried running Alexander's latest v11 page-reporting patch set
> > >   and observed a similar amount of average degradation in the number of
> > >   processes and threads.
> > > - I didn't include the linear component recorded by will-it-scale
> > >   because for some reason it was fluctuating too much even when I was
> > >   using an unmodified kernel. If required, I can investigate this further.
> > > 
> > > Note: If there is a better way to analyze the will-it-scale/page_fault 
> > > results
> > > then please do let me know.
> > Honestly I have mostly just focused on the processes performance.
> 
> In my observation, processes seem to be the most consistent in general.

Agreed.

> >  There is
> > usually a fair bit of variability but a pattern forms after a few runs
> > so you can generally tell if a configuration is an improvement or not.

Re: [PATCH v11 0/6] mm / virtio: Provide support for unused page reporting

2019-10-07 Thread Nitesh Narayan Lal


On 10/7/19 11:33 AM, Alexander Duyck wrote:
> On Mon, 2019-10-07 at 08:29 -0400, Nitesh Narayan Lal wrote:
>> On 10/2/19 10:25 AM, Alexander Duyck wrote:
>>
[...]
>> You don't have to, I can fix the issues in my patch-set. :)
>>> Sounds good. Hopefully the stuff I pointed out above helps you to get
>>> a reproduction and resolve the issues.
>> So I did observe a significant drop in running my v12 patch-set [1] with the
>> suggested test setup. However, on making certain changes the performance
>> improved significantly.
>>
>> I used my v12 patch-set which I have posted earlier and made the following
>> changes:
>> 1. Started reporting only (MAX_ORDER - 1) pages and increased the number of
>> pages that can be reported at a time to 32 from 16. The intent of making
>> these changes was to bring my configuration closer to what Alexander is
>> using.
> The increase from 16 to 32 is valid. No point in working in too small of
> batches. However tightening the order to only test for MAX_ORDER - 1 seems
> like a step in the wrong direction. The bitmap approach doesn't have much
> value if it can only work with the highest order page. I realize it is
> probably necessary in order to make the trick for checking on page_buddy
> work, but it seems very limiting.

If using (pageblock_order - 1) is a better way to do this, then I can probably
switch to that.
I will agree with the fact that we have to make the reporting order
configurable, at least to an extent.

>
>> 2. I made an additional change in my bitmap scanning logic to avoid
>> acquiring the spinlock if the page is already allocated.
> Again, not a fan. It basically means you can only work with MAX_ORDER - 1
> and there will be no ability to work with anything smaller.
>
>> Setup:
>> On a 16 vCPU 30 GB single-NUMA guest affined to a single host NUMA node,
>> I ran the modified will-it-scale/page_fault a number of times and
>> calculated the average number of processes and threads launched on the
>> 16th core to compare the impact of my patch-set against an unmodified
>> kernel.
>>
>>
>> Conclusion:
>> %Drop in number of processes launched on 16th vCPU = 1-2%
>> %Drop in number of threads launched on 16th vCPU = 5-6%
> These numbers don't make that much sense to me. Are you talking about a
> fully functioning setup that is madvising away the memory in the
> hypervisor?


Without making this change I was observing a significant drop
in the number of processes and specifically in the number of threads.
I did a double-check of the configuration which I have shared.
I was also observing the "AnonHugePages" via meminfo to check the THP usage.
Any more suggestions about what else I can do to verify?
I will be more than happy to try them out.

>  If so I would have expected a much higher difference versus
> baseline as zeroing/faulting the pages in the host gets expensive fairly
> quick. What is the host kernel you are running your test on? I'm just
> wondering if there is some additional overhead currently limiting your
> setup. My host kernel was just the same kernel I was running in the guest,
> just built without the patches applied.

Right now I have a different host-kernel. I can install the same kernel to the
host as well and see if that changes anything.

>
>> Other observations:
>> - I also tried running Alexander's latest v11 page-reporting patch set and
>>   observed a similar amount of average degradation in the number of processes
>>   and threads.
>> - I didn't include the linear component recorded by will-it-scale because for
>>   some reason it was fluctuating too much even when I was using an unmodified
>>   kernel. If required I can investigate this further.
>>
>> Note: If there is a better way to analyze the will-it-scale/page_fault 
>> results
>> then please do let me know.
> Honestly I have mostly just focused on the processes performance.

In my observation, processes seem to be the most consistent in general.

>  There is
> usually a fair bit of variability but a pattern forms after a few runs so
> you can generally tell if a configuration is an improvement or not.

Yeah, that's why I thought of taking the average of 5-6 runs.

>
>> Other setup details:
>> Following are the configurations which I enabled to run my tests:
>> - Enabled: CONFIG_SLAB_FREELIST_RANDOM & CONFIG_SHUFFLE_PAGE_ALLOCATOR
>> - Set host THP to always
>> - Set guest THP to madvise
>> - Added the suggested madvise call in page_fault source code.
>> @Alexander please let me know if I missed something.
> This seems about right.
>
>> The current state of my v13:
>> I still have to look into Michal's suggestion of using page-isolation APIs
>> instead of isolating the page. However, I believe at this moment our 
>> objective
>> is to decide with which approach we can proceed and that's why I decided to
>> post the numbers by making small required changes in v12 instead of posting a
>> new series.
>>
>>
>> Following are the changes which I have made on top of my v12:

Re: [PATCH v11 0/6] mm / virtio: Provide support for unused page reporting

2019-10-07 Thread Alexander Duyck
On Mon, 2019-10-07 at 08:29 -0400, Nitesh Narayan Lal wrote:
> On 10/2/19 10:25 AM, Alexander Duyck wrote:
> 
> [...]
> > > > My suggestion would be to look at reworking the patch set and
> > > > post numbers for my patch set versus the bitmap approach and we can
> > > > look at them then.
> > > Agreed. However, in order to fix an issue I have to reproduce it first.
> > With the tweak I have suggested above it should make it much easier to
> > reproduce. Basically all you need is to have the allocation competing
> > against hinting. Currently the hinting isn't doing this because the
> > allocations are mostly coming out of 4K pages instead of higher order
> > ones.
> > 
> > Alternatively you could just make the suggestion I had proposed about
> > using spin_lock/unlock_irq in your worker thread and that resolved it
> > for me.
> > 
> > > >  I would prefer not to spend my time fixing and
> > > > tuning a patch set that I am still not convinced is viable.
> > > You don't have to, I can fix the issues in my patch-set. :)
> > Sounds good. Hopefully the stuff I pointed out above helps you to get
> > a reproduction and resolve the issues.
> 
> So I did observe a significant drop in running my v12 patch-set [1] with the
> suggested test setup. However, on making certain changes the performance
> improved significantly.
> 
> I used my v12 patch-set which I have posted earlier and made the following
> changes:
> 1. Started reporting only (MAX_ORDER - 1) pages and increased the number of
> pages that can be reported at a time to 32 from 16. The intent of making
> these changes was to bring my configuration closer to what Alexander is
> using.

The increase from 16 to 32 is valid. No point in working in too small of
batches. However tightening the order to only test for MAX_ORDER - 1 seems
like a step in the wrong direction. The bitmap approach doesn't have much
value if it can only work with the highest order page. I realize it is
probably necessary in order to make the trick of checking PageBuddy() work,
but it seems very limiting.

> 2. I made an additional change in my bitmap scanning logic to prevent acquiring
> the spinlock if the page is already allocated.

Again, not a fan. It basically means you can only work with MAX_ORDER - 1
and there will be no ability to work with anything smaller.

> 
> Setup:
> On a 16 vCPU 30 GB single NUMA guest affined to a single host NUMA, I ran the
> modified will-it-scale/page_fault a number of times and calculated the average
> of the number of processes and threads launched on the 16th core to compare the
> impact of my patch-set against an unmodified kernel.
> 
> 
> Conclusion:
> %Drop in number of processes launched on 16th vCPU = 1-2%
> %Drop in number of threads launched on 16th vCPU = 5-6%

These numbers don't make that much sense to me. Are you talking about a
fully functioning setup that is madvising away the memory in the
hypervisor? If so I would have expected a much higher difference versus
baseline as zeroing/faulting the pages in the host gets expensive fairly
quick. What is the host kernel you are running your test on? I'm just
wondering if there is some additional overhead currently limiting your
setup. My host kernel was just the same kernel I was running in the guest,
just built without the patches applied.

> Other observations:
> - I also tried running Alexander's latest v11 page-reporting patch set and
>   observed a similar average degradation in the number of processes
>   and threads.
> - I didn't include the linear component recorded by will-it-scale because for
>   some reason it was fluctuating too much even when I was using an unmodified
>   kernel. If required I can investigate this further.
> 
> Note: If there is a better way to analyze the will-it-scale/page_fault results
> then please do let me know.

Honestly I have mostly just focused on the processes performance. There is
usually a fair bit of variability but a pattern forms after a few runs so
you can generally tell if a configuration is an improvement or not.

> Other setup details:
> Following are the configurations which I enabled to run my tests:
> - Enabled: CONFIG_SLAB_FREELIST_RANDOM & CONFIG_SHUFFLE_PAGE_ALLOCATOR
> - Set host THP to always
> - Set guest THP to madvise
> - Added the suggested madvise call in page_fault source code.
> @Alexander please let me know if I missed something.

This seems about right.

> The current state of my v13:
> I still have to look into Michal's suggestion of using the page-isolation APIs
> instead of isolating the page. However, I believe at this moment our objective
> is to decide with which approach we can proceed and that's why I decided to
> post the numbers by making small required changes in v12 instead of posting a
> new series.
> 
> 
> Following are the changes which I have made on top of my v12:
> 
> page_reporting.h change:
> -#define PAGE_REPORTING_MIN_ORDER   (MAX_ORDER - 2)
> -#define 

Re: [PATCH v11 0/6] mm / virtio: Provide support for unused page reporting

2019-10-07 Thread Nitesh Narayan Lal

On 10/2/19 10:25 AM, Alexander Duyck wrote:

[...]
>>> My suggestion would be to look at reworking the patch set and
>>> post numbers for my patch set versus the bitmap approach and we can
>>> look at them then.
>> Agreed. However, in order to fix an issue I have to reproduce it first.
> With the tweak I have suggested above it should make it much easier to
> reproduce. Basically all you need is to have the allocation competing
> against hinting. Currently the hinting isn't doing this because the
> allocations are mostly coming out of 4K pages instead of higher order
> ones.
>
> Alternatively you could just make the suggestion I had proposed about
> using spin_lock/unlock_irq in your worker thread and that resolved it
> for me.
>
>>>  I would prefer not to spend my time fixing and
>>> tuning a patch set that I am still not convinced is viable.
>> You don't have to, I can fix the issues in my patch-set. :)
> Sounds good. Hopefully the stuff I pointed out above helps you to get
> a reproduction and resolve the issues.


So I did observe a significant drop in running my v12 patch-set [1] with the
suggested test setup. However, on making certain changes the performance
improved significantly.

I used my v12 patch-set which I have posted earlier and made the following
changes:
1. Started reporting only (MAX_ORDER - 1) pages and increased the number of
    pages that can be reported at a time to 32 from 16. The intent of making
    these changes was to bring my configuration closer to what Alexander is
    using.
2. I made an additional change in my bitmap scanning logic to prevent acquiring
    the spinlock if the page is already allocated.


Setup:
On a 16 vCPU 30 GB single NUMA guest affined to a single host NUMA, I ran the
modified will-it-scale/page_fault a number of times and calculated the average
of the number of processes and threads launched on the 16th core to compare the
impact of my patch-set against an unmodified kernel.
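
(For anyone recreating this setup: one way to affine such a guest, assuming
host node 0 has enough CPUs and memory to spare, is along these lines; this
is illustrative, not my exact invocation:

    numactl --cpunodebind=0 --membind=0 \
        qemu-system-x86_64 -enable-kvm -smp 16 -m 30G ...)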


Conclusion:
%Drop in number of processes launched on 16th vCPU = 1-2%
%Drop in number of threads launched on 16th vCPU = 5-6%


Other observations:
- I also tried running Alexander's latest v11 page-reporting patch set and
  observed a similar average degradation in the number of processes
  and threads.
- I didn't include the linear component recorded by will-it-scale because for
  some reason it was fluctuating too much even when I was using an unmodified
  kernel. If required I can investigate this further.

Note: If there is a better way to analyze the will-it-scale/page_fault results
then please do let me know.


Other setup details:
Following are the configurations which I enabled to run my tests:
- Enabled: CONFIG_SLAB_FREELIST_RANDOM & CONFIG_SHUFFLE_PAGE_ALLOCATOR
- Set host THP to always
- Set guest THP to madvise
- Added the suggested madvise call in page_fault source code.
@Alexander please let me know if I missed something.


The current state of my v13:
I still have to look into Michal's suggestion of using the page-isolation APIs
instead of isolating the page. However, I believe at this moment our objective
is to decide with which approach we can proceed and that's why I decided to
post the numbers by making small required changes in v12 instead of posting a
new series.


Following are the changes which I have made on top of my v12:

page_reporting.h change:
-#define PAGE_REPORTING_MIN_ORDER   (MAX_ORDER - 2)
-#define PAGE_REPORTING_MAX_PAGES   16
+#define PAGE_REPORTING_MIN_ORDER  (MAX_ORDER - 1)
+#define PAGE_REPORTING_MAX_PAGES  32
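
(Assuming x86_64's default MAX_ORDER of 11, this moves the reporting
granularity from order-9 chunks of 2 MB to order-10 chunks of 4 MB, so a
single pass can now cover up to 32 * 4 MB = 128 MB of free memory rather
than the earlier 16 * 2 MB = 32 MB.)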

page_reporting.c change:
@@ -101,8 +101,12 @@ static void scan_zone_bitmap(struct page_reporting_config *phconf,
        /* Process only if the page is still online */
        page = pfn_to_online_page((setbit << PAGE_REPORTING_MIN_ORDER) +
                                  zone->base_pfn);
-       if (!page)
+       if (!page || !PageBuddy(page)) {
+               clear_bit(setbit, zone->bitmap);
+               atomic_dec(&zone->free_pages);
                continue;
+       }
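
For context, the hunk above sits in the bitmap walk, roughly like the sketch
below. Only zone->bitmap, zone->base_pfn and zone->free_pages are taken from
the hunk itself; the loop shape and zone->nbits are illustrative stand-ins,
not the actual v12 code:

static void scan_zone_bitmap(struct page_reporting_config *phconf,
                             struct zone *zone)
{
        unsigned long setbit;
        struct page *page;

        for_each_set_bit(setbit, zone->bitmap, zone->nbits) {
                /* Process only if the page is still online */
                page = pfn_to_online_page((setbit << PAGE_REPORTING_MIN_ORDER) +
                                          zone->base_pfn);
                /* A page that is gone or already allocated is dropped
                 * from the bitmap without ever taking the zone lock. */
                if (!page || !PageBuddy(page)) {
                        clear_bit(setbit, zone->bitmap);
                        atomic_dec(&zone->free_pages);
                        continue;
                }

                /* Only pages that still look free pay for the zone lock
                 * (IRQ-safe variant, per the earlier review feedback). */
                spin_lock_irq(&zone->lock);
                /* ... re-check PageBuddy() and isolate for reporting ... */
                spin_unlock_irq(&zone->lock);
        }
}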

@Alexander in case you decide to give it a try and find different results,
please do let me know.

[1] https://lore.kernel.org/lkml/20190812131235.27244-1-nit...@redhat.com/


-- 
Thanks
Nitesh


Re: [PATCH v11 0/6] mm / virtio: Provide support for unused page reporting

2019-10-02 Thread Nitesh Narayan Lal


On 10/1/19 4:51 PM, Dave Hansen wrote:
> On 10/1/19 1:49 PM, Alexander Duyck wrote:
>> So it looks like v12 still has issues. I'm pretty sure you should be using
>> spin_lock_irq(), not spin_lock() in page_reporting.c to avoid the
>> possibility of an IRQ firing and causing lock recursion on the zone lock.
> Lockdep should make all of this a lot easier to find.  Is it being used?

I do have it in the function which returns the pages to the buddy but I missed
it in the function that isolates the pages.
I will correct this.


-- 
Thanks
Nitesh



Re: [virtio-dev] Re: [PATCH v11 0/6] mm / virtio: Provide support for unused page reporting

2019-10-02 Thread Nitesh Narayan Lal


On 10/1/19 4:25 PM, Alexander Duyck wrote:
> On Tue, 2019-10-01 at 15:16 -0400, Nitesh Narayan Lal wrote:
>> On 10/1/19 12:21 PM, Alexander Duyck wrote:
>>> On Tue, 2019-10-01 at 17:35 +0200, David Hildenbrand wrote:
 On 01.10.19 17:29, Alexander Duyck wrote:
> 
>
> As far as possible regressions I have focused on cases where performing
> the hinting would be non-optimal, such as cases where the code isn't
> needed as memory is not over-committed, or the functionality is not in
> use. I have been using the will-it-scale/page_fault1 test running with 16
> vcpus and have modified it to use Transparent Huge Pages. With this I see
> almost no difference with the patches applied and the feature disabled.
> Likewise I see almost no difference with the feature enabled, but the
> madvise disabled in the hypervisor due to a device being assigned. With
> the feature fully enabled in both guest and hypervisor I see a regression
> between -1.86% and -8.84% versus the baseline. I found that most of the
> overhead was due to the page faulting/zeroing that comes as a result of
> the pages having been evicted from the guest.
 I think Michal asked for a performance comparison against Nitesh's
 approach, to evaluate if keeping the reported state + tracking inside
 the buddy is really worth it. Do you have any such numbers already? (or
 did my tired eyes miss them in this cover letter? :/)

>>> I thought what Michal was asking for was what was the benefit of using the
>>> boundary pointer. I added a bit up above and to the description for patch
>>> 3 as on a 32G VM it adds up to about an 18% difference without factoring in
>>> the page faulting and zeroing logic that occurs when we actually do the
>>> madvise.
>>>
>>> Do we have a working patch set for Nitesh's code? The last time I tried
>>> running his patch set I ran into issues with kernel panics. If we have a
>>> known working/stable patch set I can give it a try.
>> Did you try the v12 patch-set [1]?
>> I remember that you reported the CPU stall issue, which I fixed in the v12.
>>
>> [1] https://lkml.org/lkml/2019/8/12/593
>>
>>> - Alex
>>>
> I haven't tested it. I will pull the patches and give it a try. It works
> with the same QEMU changes that mine does, right? If so we should be able
> to get an apples-to-apples comparison.

Yes.

>
> Also, instead of providing lkml.org links to your patches in the future it
> might be better to provide a link to the lore.kernel.org version of the
> thread. So for example the v12 set would be:
> https://lore.kernel.org/lkml/20190812131235.27244-1-nit...@redhat.com/

I see, I will keep that in mind. Thanks for pointing this out.

>
> The advantage is you can just look up the message ID in your own inbox to
> figure out the link, and it provides raw access to the email if needed.
>
> Thanks.
>
> - Alex
>
>
> -
> To unsubscribe, e-mail: virtio-dev-unsubscr...@lists.oasis-open.org
> For additional commands, e-mail: virtio-dev-h...@lists.oasis-open.org
>
-- 
Thanks
Nitesh


Re: [PATCH v11 0/6] mm / virtio: Provide support for unused page reporting

2019-10-02 Thread Nitesh Narayan Lal


On 10/2/19 10:25 AM, Alexander Duyck wrote:
> On Wed, Oct 2, 2019 at 3:37 AM Nitesh Narayan Lal  wrote:
>>
>> On 10/1/19 8:55 PM, Alexander Duyck wrote:
>>> On Tue, Oct 1, 2019 at 12:16 PM Nitesh Narayan Lal  
>>> wrote:
 On 10/1/19 12:21 PM, Alexander Duyck wrote:
> On Tue, 2019-10-01 at 17:35 +0200, David Hildenbrand wrote:
>> On 01.10.19 17:29, Alexander Duyck wrote:
> 
>
> Do we have a working patch set for Nitesh's code? The last time I tried
> running his patch set I ran into issues with kernel panics. If we have a
> known working/stable patch set I can give it a try.
 Did you try the v12 patch-set [1]?
 I remember that you reported the CPU stall issue, which I fixed in the v12.

 [1] https://lkml.org/lkml/2019/8/12/593
>>> So I tried testing with the spin_lock calls replaced with spin_lock_irq
>>> to resolve the IRQ issue. I also had shuffle enabled in order to
>>> increase the number of pages being dirtied.
>>>
>>> With that setup the bitmap approach is running significantly worse
>>> than my approach, even with the boundary removed. Since I had to
>>> modify the code to even get it working I am not comfortable posting
>>> numbers.
>> I didn't face any issue in getting the code to work or compile.
>> Before my v12 posting, I did try your previously suggested test
>> (will-it-scale/page_fault1 for 12 hours on a 60 GB guest) and didn't see any
>> issues.
>> I think it would help more if you can share the setup which you are running.
> So one issue with the standard page_fault1 is that it is only
> operating at the 4K page level. You won't see much impact from your
> patches with that as the overhead of splitting a MAX_ORDER - 2 page
> down to a 4K page will end up being the biggest thing you are
> benchmarking.
>
> I think I have brought it up before but I am running with the
> page_fault1 modified to use THP. Making the change is pretty
> straightforward as  all you have to do is add an madvise to the test
> code. All that is needed is to add "madvise(c, MEMSIZE,
> MADV_HUGEPAGE);" between the assert and the for loop in the
> page_fault1 code and then rebuild the test. I actually copied
> page_fault1.c into a file I named page_fault4.c and added the line. As
> a result it seems like the code will build it as an additional test.

Thanks for explaining.

>
> The only other alteration I can think of that might have much impact
> would be to enable the page shuffling. The idea is that it will cause
> us to use more pages because half of the pages freed are dumped to the
> tail of the list so we are constantly churning the memory.
>
>>> My suggestion would be to look at reworking the patch set and
>>> post numbers for my patch set versus the bitmap approach and we can
>>> look at them then.
>> Agreed. However, in order to fix an issue I have to reproduce it first.
> With the tweak I have suggested above it should make it much easier to
> reproduce. Basically all you need is to have the allocation competing
> against hinting. Currently the hinting isn't doing this because the
> allocations are mostly coming out of 4K pages instead of higher order
> ones.

Understood.

>
> Alternatively you could just make the suggestion I had proposed about
> using spin_lock/unlock_irq in your worker thread and that resolved it
> for me.

I will first reproduce the issue as you suggested and then make the change.
That will help me understand the issue better.

>
>>>  I would prefer not to spend my time fixing and
>>> tuning a patch set that I am still not convinced is viable.
>> You don't have to, I can fix the issues in my patch-set. :)
> Sounds good. Hopefully the stuff I pointed out above helps you to get
> a reproduction and resolve the issues.

Indeed, I will try these suggestions and fix this issue.
Did you run into any other issues while building or running?

>
> - Alex

-- 
Thanks
Nitesh



Re: [PATCH v11 0/6] mm / virtio: Provide support for unused page reporting

2019-10-02 Thread Alexander Duyck
On Wed, Oct 2, 2019 at 3:37 AM Nitesh Narayan Lal  wrote:
>
>
> On 10/1/19 8:55 PM, Alexander Duyck wrote:
> > On Tue, Oct 1, 2019 at 12:16 PM Nitesh Narayan Lal  
> > wrote:
> >>
> >> On 10/1/19 12:21 PM, Alexander Duyck wrote:
> >>> On Tue, 2019-10-01 at 17:35 +0200, David Hildenbrand wrote:
>  On 01.10.19 17:29, Alexander Duyck wrote:



> >>> Do we have a working patch set for Nitesh's code? The last time I tried
> >>> running his patch set I ran into issues with kernel panics. If we have a
> >>> known working/stable patch set I can give it a try.
> >> Did you try the v12 patch-set [1]?
> >> I remember that you reported the CPU stall issue, which I fixed in the v12.
> >>
> >> [1] https://lkml.org/lkml/2019/8/12/593
> > So I tried testing with the spin_lock calls replaced with spin_lock_irq
> > to resolve the IRQ issue. I also had shuffle enabled in order to
> > increase the number of pages being dirtied.
> >
> > With that setup the bitmap approach is running significantly worse
> > than my approach, even with the boundary removed. Since I had to
> > modify the code to even get it working I am not comfortable posting
> > numbers.
>
> I didn't face any issue in getting the code to work or compile.
> Before my v12 posting, I did try your previously suggested test
> (will-it-scale/page_fault1 for 12 hours on a 60 GB guest) and didn't see any issues.
> I think it would help more if you can share the setup which you are running.

So one issue with the standard page_fault1 is that it is only
operating at the 4K page level. You won't see much impact from your
patches with that as the overhead of splitting a MAX_ORDER - 2 page
down to a 4K page will end up being the biggest thing you are
benchmarking.

I think I have brought it up before but I am running with the
page_fault1 modified to use THP. Making the change is pretty
straightforward as all you have to do is add a madvise to the test
code. All that is needed is to add "madvise(c, MEMSIZE,
MADV_HUGEPAGE);" between the assert and the for loop in the
page_fault1 code and then rebuild the test. I actually copied
page_fault1.c into a file I named page_fault4.c and added the line. As
a result it seems like the code will build it as an additional test.
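
To make that concrete, the whole of page_fault4.c then looks roughly like
this. MEMSIZE, the memset-based fault loop and the testcase() signature
follow the upstream will-it-scale test's general shape from memory (the
harness supplies main() and calls testcase()), so treat this as a sketch
rather than a verbatim copy:

#include <assert.h>
#include <string.h>
#include <sys/mman.h>

#define MEMSIZE (128 * 1024 * 1024)

char *testcase_description = "Anonymous memory page fault (THP)";

void testcase(unsigned long long *iterations, unsigned long nr)
{
	while (1) {
		char *c = mmap(NULL, MEMSIZE, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		assert(c != MAP_FAILED);

		/* The one added line relative to page_fault1.c: */
		madvise(c, MEMSIZE, MADV_HUGEPAGE);

		memset(c, 0, MEMSIZE);	/* fault the whole mapping in */
		munmap(c, MEMSIZE);

		(*iterations)++;
	}
}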

The only other alteration I can think of that might have much impact
would be to enable the page shuffling. The idea is that it will cause
us to use more pages because half of the pages freed are dumped to the
tail of the list so we are constantly churning the memory.
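
(If it helps with reproduction: with CONFIG_SHUFFLE_PAGE_ALLOCATOR=y built
in, the shuffling is then switched on at boot with the page_alloc.shuffle=1
kernel parameter.)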

> > My suggestion would be to look at reworking the patch set and
> > post numbers for my patch set versus the bitmap approach and we can
> > look at them then.
>
> Agreed. However, in order to fix an issue I have to reproduce it first.

With the tweak I have suggested above it should make it much easier to
reproduce. Basically all you need is to have the allocation competing
against hinting. Currently the hinting isn't doing this because the
allocations are mostly coming out of 4K pages instead of higher order
ones.

Alternatively you could just make the suggestion I had proposed about
using spin_lock/unlock_irq in your worker thread and that resolved it
for me.

> >  I would prefer not to spend my time fixing and
> > tuning a patch set that I am still not convinced is viable.
>
> You don't have to, I can fix the issues in my patch-set. :)

Sounds good. Hopefully the stuff I pointed out above helps you to get
a reproduction and resolve the issues.

- Alex


Re: [PATCH v11 0/6] mm / virtio: Provide support for unused page reporting

2019-10-02 Thread Nitesh Narayan Lal


On 10/2/19 3:13 AM, David Hildenbrand wrote:
> On 02.10.19 02:55, Alexander Duyck wrote:
>> On Tue, Oct 1, 2019 at 12:16 PM Nitesh Narayan Lal  wrote:
>>>
>>> On 10/1/19 12:21 PM, Alexander Duyck wrote:
 On Tue, 2019-10-01 at 17:35 +0200, David Hildenbrand wrote:
> On 01.10.19 17:29, Alexander Duyck wrote:
>> This series provides an asynchronous means of reporting to a hypervisor
>> that a guest page is no longer in use and can have the data associated
>> with it dropped. To do this I have implemented functionality that allows
>> for what I am referring to as unused page reporting. The advantage of
>> unused page reporting is that we can support a significant amount of
>> memory over-commit with improved performance as we can avoid having to
>> write/read memory from swap as the VM will instead actively participate
>> in freeing unused memory so it doesn't have to be written.
>>
>> The functionality for this is fairly simple. When enabled it will 
>> allocate
>> statistics to track the number of reported pages in a given free area.
>> When the number of free pages exceeds this value plus a high water value,
>> currently 32, it will begin performing page reporting which consists of
>> pulling non-reported pages off of the free lists of a given zone and
>> placing them into a scatterlist. The scatterlist is then given to the 
>> page
>> reporting device and it will perform the required action to make the 
>> pages
>> "reported", in the case of virtio-balloon this results in the pages being
>> madvised as MADV_DONTNEED. After this they are placed back on their
>> original free list. If they are not merged in freeing an additional bit 
>> is
>> set indicating that they are a "reported" buddy page instead of a 
>> standard
>> buddy page. The cycle then repeats with additional non-reported pages
>> being pulled until the free areas all consist of reported pages.
>>
>> In order to try and keep the time needed to find a non-reported page to
>> a minimum we maintain a "reported_boundary" pointer. This pointer is used
>> by the get_unreported_pages iterator to determine at what point it should
>> resume searching for non-reported pages. In order to guarantee pages do
>> not get past the scan I have modified add_to_free_list_tail so that it
>> will not insert pages behind the reported_boundary. Doing this allows us
>> to keep the overhead to a minimum as re-walking the list without the
>> boundary will result in as much as 18% additional overhead on a 32G VM.
>>
>>
 

>> As far as possible regressions I have focused on cases where performing
>> the hinting would be non-optimal, such as cases where the code isn't
>> needed as memory is not over-committed, or the functionality is not in
>> use. I have been using the will-it-scale/page_fault1 test running with 16
>> vcpus and have modified it to use Transparent Huge Pages. With this I see
>> almost no difference with the patches applied and the feature disabled.
>> Likewise I see almost no difference with the feature enabled, but the
>> madvise disabled in the hypervisor due to a device being assigned. With
>> the feature fully enabled in both guest and hypervisor I see a regression
>> between -1.86% and -8.84% versus the baseline. I found that most of the
>> overhead was due to the page faulting/zeroing that comes as a result of
>> the pages having been evicted from the guest.
> I think Michal asked for a performance comparison against Nitesh's
> approach, to evaluate if keeping the reported state + tracking inside
> the buddy is really worth it. Do you have any such numbers already? (or
> did my tired eyes miss them in this cover letter? :/)
>
 I thought what Michal was asking for was what was the benefit of using the
 boundary pointer. I added a bit up above and to the description for patch
 3 as on a 32G VM it adds up to about an 18% difference without factoring in
 the page faulting and zeroing logic that occurs when we actually do the
 madvise.

 Do we have a working patch set for Nitesh's code? The last time I tried
 running his patch set I ran into issues with kernel panics. If we have a
 known working/stable patch set I can give it a try.
>>> Did you try the v12 patch-set [1]?
>>> I remember that you reported the CPU stall issue, which I fixed in the v12.
>>>
>>> [1] https://lkml.org/lkml/2019/8/12/593
>> So I tried testing with the spin_lock calls replaced with spin_lock_irq
>> to resolve the IRQ issue. I also had shuffle enabled in order to
>> increase the number of pages being dirtied.
>>
>> With that setup the bitmap approach is running significantly worse
>> than my approach, even with the boundary removed. Since I had to
> It would make sense to share the setup+benchmark+performance 

Re: [PATCH v11 0/6] mm / virtio: Provide support for unused page reporting

2019-10-02 Thread Nitesh Narayan Lal


On 10/1/19 8:55 PM, Alexander Duyck wrote:
> On Tue, Oct 1, 2019 at 12:16 PM Nitesh Narayan Lal  wrote:
>>
>> On 10/1/19 12:21 PM, Alexander Duyck wrote:
>>> On Tue, 2019-10-01 at 17:35 +0200, David Hildenbrand wrote:
 On 01.10.19 17:29, Alexander Duyck wrote:
> This series provides an asynchronous means of reporting to a hypervisor
> that a guest page is no longer in use and can have the data associated
> with it dropped. To do this I have implemented functionality that allows
> for what I am referring to as unused page reporting. The advantage of
> unused page reporting is that we can support a significant amount of
> memory over-commit with improved performance as we can avoid having to
> write/read memory from swap as the VM will instead actively participate
> in freeing unused memory so it doesn't have to be written.
>
> The functionality for this is fairly simple. When enabled it will allocate
> statistics to track the number of reported pages in a given free area.
> When the number of free pages exceeds this value plus a high water value,
> currently 32, it will begin performing page reporting which consists of
> pulling non-reported pages off of the free lists of a given zone and
> placing them into a scatterlist. The scatterlist is then given to the page
> reporting device and it will perform the required action to make the pages
> "reported", in the case of virtio-balloon this results in the pages being
> madvised as MADV_DONTNEED. After this they are placed back on their
> original free list. If they are not merged in freeing an additional bit is
> set indicating that they are a "reported" buddy page instead of a standard
> buddy page. The cycle then repeats with additional non-reported pages
> being pulled until the free areas all consist of reported pages.
>
> In order to try and keep the time needed to find a non-reported page to
> a minimum we maintain a "reported_boundary" pointer. This pointer is used
> by the get_unreported_pages iterator to determine at what point it should
> resume searching for non-reported pages. In order to guarantee pages do
> not get past the scan I have modified add_to_free_list_tail so that it
> will not insert pages behind the reported_boundary. Doing this allows us
> to keep the overhead to a minimum as re-walking the list without the
> boundary will result in as much as 18% additional overhead on a 32G VM.
>
>
>>> 
>>>
> As far as possible regressions I have focused on cases where performing
> the hinting would be non-optimal, such as cases where the code isn't
> needed as memory is not over-committed, or the functionality is not in
> use. I have been using the will-it-scale/page_fault1 test running with 16
> vcpus and have modified it to use Transparent Huge Pages. With this I see
> almost no difference with the patches applied and the feature disabled.
> Likewise I see almost no difference with the feature enabled, but the
> madvise disabled in the hypervisor due to a device being assigned. With
> the feature fully enabled in both guest and hypervisor I see a regression
> between -1.86% and -8.84% versus the baseline. I found that most of the
> overhead was due to the page faulting/zeroing that comes as a result of
> the pages having been evicted from the guest.
 I think Michal asked for a performance comparison against Nitesh's
 approach, to evaluate if keeping the reported state + tracking inside
 the buddy is really worth it. Do you have any such numbers already? (or
 did my tired eyes miss them in this cover letter? :/)

>>> I thought what Michal was asking for was what was the benefit of using the
>>> boundary pointer. I added a bit up above and to the description for patch
>>> 3 as on a 32G VM it adds up to about an 18% difference without factoring in
>>> the page faulting and zeroing logic that occurs when we actually do the
>>> madvise.
>>>
>>> Do we have a working patch set for Nitesh's code? The last time I tried
>>> running his patch set I ran into issues with kernel panics. If we have a
>>> known working/stable patch set I can give it a try.
>> Did you try the v12 patch-set [1]?
>> I remember that you reported the CPU stall issue, which I fixed in the v12.
>>
>> [1] https://lkml.org/lkml/2019/8/12/593
> So I tried testing with the spin_lock calls replaced with spin_lock_irq
> to resolve the IRQ issue. I also had shuffle enabled in order to
> increase the number of pages being dirtied.
>
> With that setup the bitmap approach is running significantly worse
> than my approach, even with the boundary removed. Since I had to
> modify the code to even get it working I am not comfortable posting
> numbers.

I didn't face any issue in getting the code to work or compile.
Before my v12 posting, I did try your previously suggested test

Re: [PATCH v11 0/6] mm / virtio: Provide support for unused page reporting

2019-10-02 Thread David Hildenbrand
On 02.10.19 02:55, Alexander Duyck wrote:
> On Tue, Oct 1, 2019 at 12:16 PM Nitesh Narayan Lal  wrote:
>>
>>
>> On 10/1/19 12:21 PM, Alexander Duyck wrote:
>>> On Tue, 2019-10-01 at 17:35 +0200, David Hildenbrand wrote:
 On 01.10.19 17:29, Alexander Duyck wrote:
> This series provides an asynchronous means of reporting to a hypervisor
> that a guest page is no longer in use and can have the data associated
> with it dropped. To do this I have implemented functionality that allows
> for what I am referring to as unused page reporting. The advantage of
> unused page reporting is that we can support a significant amount of
> memory over-commit with improved performance as we can avoid having to
> write/read memory from swap as the VM will instead actively participate
> in freeing unused memory so it doesn't have to be written.
>
> The functionality for this is fairly simple. When enabled it will allocate
> statistics to track the number of reported pages in a given free area.
> When the number of free pages exceeds this value plus a high water value,
> currently 32, it will begin performing page reporting which consists of
> pulling non-reported pages off of the free lists of a given zone and
> placing them into a scatterlist. The scatterlist is then given to the page
> reporting device and it will perform the required action to make the pages
> "reported", in the case of virtio-balloon this results in the pages being
> madvised as MADV_DONTNEED. After this they are placed back on their
> original free list. If they are not merged in freeing an additional bit is
> set indicating that they are a "reported" buddy page instead of a standard
> buddy page. The cycle then repeats with additional non-reported pages
> being pulled until the free areas all consist of reported pages.
>
> In order to try and keep the time needed to find a non-reported page to
> a minimum we maintain a "reported_boundary" pointer. This pointer is used
> by the get_unreported_pages iterator to determine at what point it should
> resume searching for non-reported pages. In order to guarantee pages do
> not get past the scan I have modified add_to_free_list_tail so that it
> will not insert pages behind the reported_boundary. Doing this allows us
> to keep the overhead to a minimum as re-walking the list without the
> boundary will result in as much as 18% additional overhead on a 32G VM.
>
>
>>> 
>>>
> As far as possible regressions I have focused on cases where performing
> the hinting would be non-optimal, such as cases where the code isn't
> needed as memory is not over-committed, or the functionality is not in
> use. I have been using the will-it-scale/page_fault1 test running with 16
> vcpus and have modified it to use Transparent Huge Pages. With this I see
> almost no difference with the patches applied and the feature disabled.
> Likewise I see almost no difference with the feature enabled, but the
> madvise disabled in the hypervisor due to a device being assigned. With
> the feature fully enabled in both guest and hypervisor I see a regression
> between -1.86% and -8.84% versus the baseline. I found that most of the
> overhead was due to the page faulting/zeroing that comes as a result of
> the pages having been evicted from the guest.
 I think Michal asked for a performance comparison against Nitesh's
 approach, to evaluate if keeping the reported state + tracking inside
 the buddy is really worth it. Do you have any such numbers already? (or
 did my tired eyes miss them in this cover letter? :/)

>>> I thought what Michal was asking for was what was the benefit of using the
>>> boundary pointer. I added a bit up above and to the description for patch
>>> 3 as on a 32G VM it adds up to about an 18% difference without factoring in
>>> the page faulting and zeroing logic that occurs when we actually do the
>>> madvise.
>>>
>>> Do we have a working patch set for Nitesh's code? The last time I tried
>>> running his patch set I ran into issues with kernel panics. If we have a
>>> known working/stable patch set I can give it a try.
>>
>> Did you try the v12 patch-set [1]?
>> I remember that you reported the CPU stall issue, which I fixed in the v12.
>>
>> [1] https://lkml.org/lkml/2019/8/12/593
> 
> So I tried testing with the spin_lock calls replaced with spin_lock_irq
> to resolve the IRQ issue. I also had shuffle enabled in order to
> increase the number of pages being dirtied.
>
> With that setup the bitmap approach is running significantly worse
> than my approach, even with the boundary removed. Since I had to

It would make sense to share the setup+benchmark+performance indication
that you measured. You don't have to share the actual numbers.

> modify the code to even get it working I am not comfortable posting
> numbers. 

Re: [PATCH v11 0/6] mm / virtio: Provide support for unused page reporting

2019-10-01 Thread Alexander Duyck
On Tue, Oct 1, 2019 at 12:16 PM Nitesh Narayan Lal  wrote:
>
>
> On 10/1/19 12:21 PM, Alexander Duyck wrote:
> > On Tue, 2019-10-01 at 17:35 +0200, David Hildenbrand wrote:
> >> On 01.10.19 17:29, Alexander Duyck wrote:
> >>> This series provides an asynchronous means of reporting to a hypervisor
> >>> that a guest page is no longer in use and can have the data associated
> >>> with it dropped. To do this I have implemented functionality that allows
> >>> for what I am referring to as unused page reporting. The advantage of
> >>> unused page reporting is that we can support a significant amount of
> >>> memory over-commit with improved performance as we can avoid having to
> >>> write/read memory from swap as the VM will instead actively participate
> >>> in freeing unused memory so it doesn't have to be written.
> >>>
> >>> The functionality for this is fairly simple. When enabled it will allocate
> >>> statistics to track the number of reported pages in a given free area.
> >>> When the number of free pages exceeds this value plus a high water value,
> >>> currently 32, it will begin performing page reporting which consists of
> >>> pulling non-reported pages off of the free lists of a given zone and
> >>> placing them into a scatterlist. The scatterlist is then given to the page
> >>> reporting device and it will perform the required action to make the pages
> >>> "reported", in the case of virtio-balloon this results in the pages being
> >>> madvised as MADV_DONTNEED. After this they are placed back on their
> >>> original free list. If they are not merged in freeing an additional bit is
> >>> set indicating that they are a "reported" buddy page instead of a standard
> >>> buddy page. The cycle then repeats with additional non-reported pages
> >>> being pulled until the free areas all consist of reported pages.
> >>>
> >>> In order to try and keep the time needed to find a non-reported page to
> >>> a minimum we maintain a "reported_boundary" pointer. This pointer is used
> >>> by the get_unreported_pages iterator to determine at what point it should
> >>> resume searching for non-reported pages. In order to guarantee pages do
> >>> not get past the scan I have modified add_to_free_list_tail so that it
> >>> will not insert pages behind the reported_boundary. Doing this allows us
> >>> to keep the overhead to a minimum as re-walking the list without the
> >>> boundary will result in as much as 18% additional overhead on a 32G VM.
> >>>
> >>>
> > 
> >
> >>> As far as possible regressions I have focused on cases where performing
> >>> the hinting would be non-optimal, such as cases where the code isn't
> >>> needed as memory is not over-committed, or the functionality is not in
> >>> use. I have been using the will-it-scale/page_fault1 test running with 16
> >>> vcpus and have modified it to use Transparent Huge Pages. With this I see
> >>> almost no difference with the patches applied and the feature disabled.
> >>> Likewise I see almost no difference with the feature enabled, but the
> >>> madvise disabled in the hypervisor due to a device being assigned. With
> >>> the feature fully enabled in both guest and hypervisor I see a regression
> >>> between -1.86% and -8.84% versus the baseline. I found that most of the
> >>> overhead was due to the page faulting/zeroing that comes as a result of
> >>> the pages having been evicted from the guest.
> >> I think Michal asked for a performance comparison against Nitesh's
> >> approach, to evaluate if keeping the reported state + tracking inside
> >> the buddy is really worth it. Do you have any such numbers already? (or
> >> did my tired eyes miss them in this cover letter? :/)
> >>
> > I thought what Michal was asking for was what was the benefit of using the
> > boundary pointer. I added a bit up above and to the description for patch
> > 3 as on a 32G VM it adds up to about an 18% difference without factoring in
> > the page faulting and zeroing logic that occurs when we actually do the
> > madvise.
> >
> > Do we have a working patch set for Nitesh's code? The last time I tried
> > running his patch set I ran into issues with kernel panics. If we have a
> > known working/stable patch set I can give it a try.
>
> Did you try the v12 patch-set [1]?
> I remember that you reported the CPU stall issue, which I fixed in the v12.
>
> [1] https://lkml.org/lkml/2019/8/12/593

So I tried testing with the spin_lock calls replaced with spin_lock
_irq to resolve the IRQ issue. I also had shuffle enabled in order to
increase the number of pages being dirtied.

With that setup the bitmap approach is running significantly worse
then my approach, even with the boundary removed. Since I had to
modify the code to even getting working I am not comfortable posting
numbers. My suggestion would be to look at reworking the patch set and
post numbers for my patch set versus the bitmap approach and we can
look at them then. I would prefer not to spend my time fixing and
tuning a 

Re: [PATCH v11 0/6] mm / virtio: Provide support for unused page reporting

2019-10-01 Thread Dave Hansen
On 10/1/19 1:49 PM, Alexander Duyck wrote:
> So it looks like v12 still has issues. I'm pretty sure you should be using
> spin_lock_irq(), not spin_lock() in page_reporting.c to avoid the
> possibility of an IRQ firing and causing lock recursion on the zone lock.

Lockdep should make all of this a lot easier to find.  Is it being used?
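
(For reference, the usual options for that, though the exact set can vary
by tree:

    CONFIG_PROVE_LOCKING=y
    CONFIG_DEBUG_SPINLOCK=y
    CONFIG_DEBUG_ATOMIC_SLEEP=y

PROVE_LOCKING pulls in lockdep and flags the in-IRQ vs. non-IRQ zone-lock
usage the first time it is observed, without needing an actual deadlock.)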


Re: [PATCH v11 0/6] mm / virtio: Provide support for unused page reporting

2019-10-01 Thread Alexander Duyck
On Tue, 2019-10-01 at 13:25 -0700, Alexander Duyck wrote:
> On Tue, 2019-10-01 at 15:16 -0400, Nitesh Narayan Lal wrote:
> > On 10/1/19 12:21 PM, Alexander Duyck wrote:
> > > On Tue, 2019-10-01 at 17:35 +0200, David Hildenbrand wrote:
> > > > On 01.10.19 17:29, Alexander Duyck wrote:
> 
> 
> 
> > > > > As far as possible regressions I have focused on cases where 
> > > > > performing
> > > > > the hinting would be non-optimal, such as cases where the code isn't
> > > > > needed as memory is not over-committed, or the functionality is not in
> > > > > use. I have been using the will-it-scale/page_fault1 test running 
> > > > > with 16
> > > > > vcpus and have modified it to use Transparent Huge Pages. With this I 
> > > > > see
> > > > > almost no difference with the patches applied and the feature 
> > > > > disabled.
> > > > > Likewise I see almost no difference with the feature enabled, but the
> > > > > madvise disabled in the hypervisor due to a device being assigned. 
> > > > > With
> > > > > the feature fully enabled in both guest and hypervisor I see a 
> > > > > regression
> > > > > between -1.86% and -8.84% versus the baseline. I found that most of 
> > > > > the
> > > > > overhead was due to the page faulting/zeroing that comes as a result 
> > > > > of
> > > > > the pages having been evicted from the guest.
> > > > I think Michal asked for a performance comparison against Nitesh's
> > > > approach, to evaluate if keeping the reported state + tracking inside
> > > > the buddy is really worth it. Do you have any such numbers already? (or
> > > > did my tired eyes miss them in this cover letter? :/)
> > > > 
> > > I thought what Michal was asking for was what was the benefit of using the
> > > boundary pointer. I added a bit up above and to the description for patch
> > > 3 as on a 32G VM it adds up to about an 18% difference without factoring in
> > > the page faulting and zeroing logic that occurs when we actually do the
> > > madvise.
> > > 
> > > Do we have a working patch set for Nitesh's code? The last time I tried
> > > running his patch set I ran into issues with kernel panics. If we have a
> > > known working/stable patch set I can give it a try.
> > 
> > Did you try the v12 patch-set [1]?
> > I remember that you reported the CPU stall issue, which I fixed in the v12.
> > 
> > [1] https://lkml.org/lkml/2019/8/12/593
> > 
> > > - Alex
> > > 
> 
> I haven't tested it. I will pull the patches and give it a try. It works
> with the same QEMU changes that mine does, right? If so we should be able
> to get an apples-to-apples comparison.
> 
> Also, instead of providing lkml.org links to your patches in the future it
> might be better to provide a link to the lore.kernel.org version of the
> thread. So for example the v12 set would be:
> https://lore.kernel.org/lkml/20190812131235.27244-1-nit...@redhat.com/
> 
> The advantage is you can just look up the message ID in your own inbox to
> figure out the link, and it provides raw access to the email if needed.
> 
> Thanks.
> 
> - Alex

So it looks like v12 still has issues. I'm pretty sure you should be using
spin_lock_irq(), not spin_lock() in page_reporting.c to avoid the
possibility of an IRQ firing and causing lock recursion on the zone lock.

I'm trying to work around it now, but it needs to be addressed for future
versions.
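
Roughly, the change amounts to this (a minimal sketch; the function name and
body are illustrative rather than the actual v12 page_reporting.c, only the
lock calls are the point):

static void page_reporting_scan(struct zone *zone)
{
	/* IRQ-safe variant: an IRQ that frees a page while we hold
	 * zone->lock would otherwise take it again and deadlock. */
	spin_lock_irq(&zone->lock);	/* was: spin_lock(&zone->lock) */

	/* ... pull free pages off the free lists for reporting ... */

	spin_unlock_irq(&zone->lock);	/* was: spin_unlock(&zone->lock) */
}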

Here is the lock-up my guest reported.

[  127.869086] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[  127.872219] rcu: 0-...0: (0 ticks this GP) idle=94e/1/0x4002 
softirq=5354/5354 fqs=15000 
[  127.874915] rcu: 1-...0: (0 ticks this GP) idle=3b6/1/0x4000 
softirq=3359/3359 fqs=15000 
[  127.877616]  (detected by 2, t=60004 jiffies, g=8153, q=8)
[  127.879229] Sending NMI from CPU 2 to CPUs 0:
[  127.881523] NMI backtrace for cpu 0
[  127.881524] CPU: 0 PID: 658 Comm: kworker/0:6 Not tainted 
5.3.0-next-20190930nshuffle+ #2
[  127.881524] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 
Bochs 01/01/2011
[  127.881525] Workqueue: events page_reporting_wq
[  127.881526] RIP: 0010:queued_spin_lock_slowpath+0x21/0x1f0
[  127.881526] Code: c0 75 ec c3 90 90 90 90 90 0f 1f 44 00 00 0f 1f 44 00 00 
ba 01 00 00 00 8b 07 85 c0 75 0a f0 0f b1 17 85 c0 75 f2 f3 c3 f3 90  ec 81 
fe 00 01 00 00 0f 84 44 01 00 00 81 e6 00 ff ff ff 75 3e
[  127.881527] RSP: 0018:b77480003df0 EFLAGS: 0002
[  127.881527] RAX: 0001 RBX: 0001 RCX: dead0122
[  127.881528] RDX: 0001 RSI: 0001 RDI: 992a3fffd240
[  127.881528] RBP: 0006 R08:  R09: dd9c508cf948
[  127.881528] R10:  R11:  R12: 992a3fffcd00
[  127.881529] R13: dd9c508cf900 R14: 992a2fa2e380 R15: 0001
[  127.881529] FS:  () GS:992a2fa0() 
knlGS:
[  127.881529] CS:  0010 DS:  ES:  CR0: 80050033
[  127.881530] CR2: 77c5 

Re: [PATCH v11 0/6] mm / virtio: Provide support for unused page reporting

2019-10-01 Thread Alexander Duyck
On Tue, 2019-10-01 at 15:16 -0400, Nitesh Narayan Lal wrote:
> On 10/1/19 12:21 PM, Alexander Duyck wrote:
> > On Tue, 2019-10-01 at 17:35 +0200, David Hildenbrand wrote:
> > > On 01.10.19 17:29, Alexander Duyck wrote:



> > > > 
> > > > As far as possible regressions I have focused on cases where performing
> > > > the hinting would be non-optimal, such as cases where the code isn't
> > > > needed as memory is not over-committed, or the functionality is not in
> > > > use. I have been using the will-it-scale/page_fault1 test running with 
> > > > 16
> > > > vcpus and have modified it to use Transparent Huge Pages. With this I 
> > > > see
> > > > almost no difference with the patches applied and the feature disabled.
> > > > Likewise I see almost no difference with the feature enabled, but the
> > > > madvise disabled in the hypervisor due to a device being assigned. With
> > > > the feature fully enabled in both guest and hypervisor I see a 
> > > > regression
> > > > between -1.86% and -8.84% versus the baseline. I found that most of the
> > > > overhead was due to the page faulting/zeroing that comes as a result of
> > > > the pages having been evicted from the guest.
> > > I think Michal asked for a performance comparison against Nitesh's
> > > approach, to evaluate if keeping the reported state + tracking inside
> > > the buddy is really worth it. Do you have any such numbers already? (or
> > > did my tired eyes miss them in this cover letter? :/)
> > > 
> > I thought what Michal was asking for was what was the benefit of using the
> > boundary pointer. I added a bit up above and to the description for patch
> > 3 as on a 32G VM it adds up to about an 18% difference without factoring in
> > the page faulting and zeroing logic that occurs when we actually do the
> > madvise.
> > 
> > Do we have a working patch set for Nitesh's code? The last time I tried
> > running his patch set I ran into issues with kernel panics. If we have a
> > known working/stable patch set I can give it a try.
> 
> Did you try the v12 patch-set [1]?
> I remember that you reported the CPU stall issue, which I fixed in the v12.
> 
> [1] https://lkml.org/lkml/2019/8/12/593
> 
> > - Alex
> > 

I haven't tested it. I will pull the patches and give it a try. It works
with the same QEMU changes that mine does, right? If so we should be able
to get an apples-to-apples comparison.

Also, instead of providing lkml.org links to your patches in the future it
might be better to provide a link to the lore.kernel.org version of the
thread. So for example the v12 set would be:
https://lore.kernel.org/lkml/20190812131235.27244-1-nit...@redhat.com/

The advantage is you can just look up the message ID in your own inbox to
figure out the link, and it provides raw access to the email if needed.
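
For example, given a mail whose headers include:

    Message-ID: <20190812131235.27244-1-nit...@redhat.com>

the lore link is just that ID appended to the archive base:

    https://lore.kernel.org/lkml/20190812131235.27244-1-nit...@redhat.com/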

Thanks.

- Alex



Re: [PATCH v11 0/6] mm / virtio: Provide support for unused page reporting

2019-10-01 Thread Nitesh Narayan Lal


On 10/1/19 2:41 PM, David Hildenbrand wrote:
>>> I think Michal asked for a performance comparison against Nitesh's
>>> approach, to evaluate if keeping the reported state + tracking inside
>>> the buddy is really worth it. Do you have any such numbers already? (or
>>> did my tired eyes miss them in this cover letter? :/)
>>>
>> I thought what Michal was asking for was what was the benefit of using the
>> boundary pointer. I added a bit up above and to the description for patch
>> 3 as on a 32G VM it adds up to about an 18% difference without factoring in
>> the page faulting and zeroing logic that occurs when we actually do the
>> madvise.
> "I would still be happier if the allocator wouldn't really have to
> bother about somebody snooping its internal state to do its own thing.
> So make sure to describe why and how much this really matters.
> [...]
> if you gave some rough numbers to quantify how much overhead for
> different solutions we are talking about here.
> "
>
> Could be that I'm misreading Michal's comment, but I'd be interested in
> the "how much" as well.
>
>> Do we have a working patch set for Nitesh's code? The last time I tried
>> running his patch set I ran into issues with kernel panics. If we have a
>> known working/stable patch set I can give it a try.
> @Nitesh, is there a working branch?

For some unknown reason, I received this set of emails just now :)
That's why I couldn't respond earlier.

>
>
-- 
Thanks
Nitesh


Re: [PATCH v11 0/6] mm / virtio: Provide support for unused page reporting

2019-10-01 Thread Nitesh Narayan Lal


On 10/1/19 12:21 PM, Alexander Duyck wrote:
> On Tue, 2019-10-01 at 17:35 +0200, David Hildenbrand wrote:
>> On 01.10.19 17:29, Alexander Duyck wrote:
>>> This series provides an asynchronous means of reporting to a hypervisor
>>> that a guest page is no longer in use and can have the data associated
>>> with it dropped. To do this I have implemented functionality that allows
>>> for what I am referring to as unused page reporting. The advantage of
>>> unused page reporting is that we can support a significant amount of
>>> memory over-commit with improved performance as we can avoid having to
>>> write/read memory from swap as the VM will instead actively participate
>>> in freeing unused memory so it doesn't have to be written.
>>>
>>> The functionality for this is fairly simple. When enabled it will allocate
>>> statistics to track the number of reported pages in a given free area.
>>> When the number of free pages exceeds this value plus a high water value,
>>> currently 32, it will begin performing page reporting which consists of
>>> pulling non-reported pages off of the free lists of a given zone and
>>> placing them into a scatterlist. The scatterlist is then given to the page
>>> reporting device and it will perform the required action to make the pages
>>> "reported", in the case of virtio-balloon this results in the pages being
>>> madvised as MADV_DONTNEED. After this they are placed back on their
>>> original free list. If they are not merged in freeing an additional bit is
>>> set indicating that they are a "reported" buddy page instead of a standard
>>> buddy page. The cycle then repeats with additional non-reported pages
>>> being pulled until the free areas all consist of reported pages.
>>>
>>> In order to try and keep the time needed to find a non-reported page to
>>> a minimum we maintain a "reported_boundary" pointer. This pointer is used
>>> by the get_unreported_pages iterator to determine at what point it should
>>> resume searching for non-reported pages. In order to guarantee pages do
>>> not get past the scan I have modified add_to_free_list_tail so that it
>>> will not insert pages behind the reported_boundary. Doing this allows us
>>> to keep the overhead to a minimum as re-walking the list without the
>>> boundary will result in as much as 18% additional overhead on a 32G VM.
>>>
>>>
> 
>
>>> As far as possible regressions I have focused on cases where performing
>>> the hinting would be non-optimal, such as cases where the code isn't
>>> needed as memory is not over-committed, or the functionality is not in
>>> use. I have been using the will-it-scale/page_fault1 test running with 16
>>> vcpus and have modified it to use Transparent Huge Pages. With this I see
>>> almost no difference with the patches applied and the feature disabled.
>>> Likewise I see almost no difference with the feature enabled, but the
>>> madvise disabled in the hypervisor due to a device being assigned. With
>>> the feature fully enabled in both guest and hypervisor I see a regression
>>> between -1.86% and -8.84% versus the baseline. I found that most of the
>>> overhead was due to the page faulting/zeroing that comes as a result of
>>> the pages having been evicted from the guest.
>> I think Michal asked for a performance comparison against Nitesh's
>> approach, to evaluate if keeping the reported state + tracking inside
>> the buddy is really worth it. Do you have any such numbers already? (or
>> did my tired eyes miss them in this cover letter? :/)
>>
> I thought what Michal was asking for was what was the benefit of using the
> boundary pointer. I added a bit up above and to the description for patch
> 3 as on a 32G VM it adds up to about an 18% difference without factoring in
> the page faulting and zeroing logic that occurs when we actually do the
> madvise.
>
> Do we have a working patch set for Nitesh's code? The last time I tried
> running his patch set I ran into issues with kernel panics. If we have a
> known working/stable patch set I can give it a try.

Did you try the v12 patch-set [1]?
I remember that you reported the CPU stall issue, which I fixed in the v12.

[1] https://lkml.org/lkml/2019/8/12/593

>
> - Alex
>
-- 
Thanks
Nitesh


Re: [PATCH v11 0/6] mm / virtio: Provide support for unused page reporting

2019-10-01 Thread Michael S. Tsirkin
On Tue, Oct 01, 2019 at 09:21:46AM -0700, Alexander Duyck wrote:
> I thought what Michal was asking for was what was the benefit of using the
> boundary pointer. I added a bit up above and to the description for patch
> 3 as on a 32G VM it adds up to about an 18% difference without factoring in
> the page faulting and zeroing logic that occurs when we actually do the
> madvise.

Something maybe worth adding to the log:

one disadvantage of the tight integration with the mm core is
that only a single reporting device is supported.
It's not obvious that more than one is useful though.

-- 
MST


Re: [PATCH v11 0/6] mm / virtio: Provide support for unused page reporting

2019-10-01 Thread David Hildenbrand
>> I think Michal asked for a performance comparison against Nitesh's
>> approach, to evaluate if keeping the reported state + tracking inside
>> the buddy is really worth it. Do you have any such numbers already? (or
>> did my tired eyes miss them in this cover letter? :/)
>>
> 
> I thought what Michal was asking for was what was the benefit of using the
> boundary pointer. I added a bit up above and to the description for patch
> 3 as on a 32G VM it adds up to about an 18% difference without factoring in
> the page faulting and zeroing logic that occurs when we actually do the
> madvise.

"I would still be happier if the allocator wouldn't really have to
bother about somebody snooping its internal state to do its own thing.
So make sure to describe why and how much this really matters.
[...]
if you gave some rough numbers to quantify how much overhead for
different solutions we are talking about here.
"

Could be that I'm misreading Michal's comment, but I'd be interested in
the "how much" as well.

> 
> Do we have a working patch set for Nitesh's code? The last time I tried
> running his patch set I ran into issues with kernel panics. If we have a
> known working/stable patch set I can give it a try.

@Nitesh, is there a working branch?


-- 

Thanks,

David / dhildenb


Re: [PATCH v11 0/6] mm / virtio: Provide support for unused page reporting

2019-10-01 Thread Alexander Duyck
On Tue, 2019-10-01 at 17:35 +0200, David Hildenbrand wrote:
> On 01.10.19 17:29, Alexander Duyck wrote:
> > This series provides an asynchronous means of reporting to a hypervisor
> > that a guest page is no longer in use and can have the data associated
> > with it dropped. To do this I have implemented functionality that allows
> > for what I am referring to as unused page reporting. The advantage of
> > unused page reporting is that we can support a significant amount of
> > memory over-commit with improved performance as we can avoid having to
> > write/read memory from swap as the VM will instead actively participate
> > in freeing unused memory so it doesn't have to be written.
> > 
> > The functionality for this is fairly simple. When enabled it will allocate
> > statistics to track the number of reported pages in a given free area.
> > When the number of free pages exceeds this value plus a high water value,
> > currently 32, it will begin performing page reporting which consists of
> > pulling non-reported pages off of the free lists of a given zone and
> > placing them into a scatterlist. The scatterlist is then given to the page
> > reporting device and it will perform the required action to make the pages
> > "reported", in the case of virtio-balloon this results in the pages being
> > madvised as MADV_DONTNEED. After this they are placed back on their
> > original free list. If they are not merged in freeing an additional bit is
> > set indicating that they are a "reported" buddy page instead of a standard
> > buddy page. The cycle then repeats with additional non-reported pages
> > being pulled until the free areas all consist of reported pages.
> > 
> > In order to keep the time needed to find a non-reported page to a minimum
> > we maintain a "reported_boundary" pointer. This pointer is used by the
> > get_unreported_pages iterator to determine at what point it should resume
> > searching for non-reported pages. In order to guarantee that pages do not
> > get past the scan, I have modified add_to_free_list_tail so that it will
> > not insert pages behind the reported_boundary. Doing this keeps the
> > overhead to a minimum, as re-walking the list without the boundary results
> > in as much as 18% additional overhead on a 32G VM.
> > 
> > 



> > As far as possible regressions, I have focused on cases where performing
> > the hinting would be non-optimal, such as cases where the code isn't
> > needed because memory is not over-committed, or the functionality is not
> > in use. I have been using the will-it-scale/page_fault1 test running with
> > 16 vcpus, modified to use Transparent Huge Pages. With this I see almost
> > no difference with the patches applied and the feature disabled. Likewise
> > I see almost no difference with the feature enabled but the madvise
> > disabled in the hypervisor due to a device being assigned. With the
> > feature fully enabled in both guest and hypervisor I see a regression
> > between -1.86% and -8.84% versus the baseline. I found that most of the
> > overhead was due to the page faulting/zeroing that comes as a result of
> > the pages having been evicted from the guest.
> 
> I think Michal asked for a performance comparison against Nitesh's
> approach, to evaluate if keeping the reported state + tracking inside
> the buddy is really worth it. Do you have any such numbers already? (or
> did my tired eyes miss them in this cover letter? :/)
> 

I thought what Michal was asking for was the benefit of using the
boundary pointer. I added a bit up above and to the description for patch
3, as on a 32G VM it adds up to about an 18% difference without factoring
in the page faulting and zeroing logic that occurs when we actually do
the madvise.

Do we have a working patch set for Nitesh's code? The last time I tried
running his patch set I ran into kernel panics. If we have a
known working/stable patch set I can give it a try.

- Alex



Re: [PATCH v11 0/6] mm / virtio: Provide support for unused page reporting

2019-10-01 Thread David Hildenbrand
On 01.10.19 17:29, Alexander Duyck wrote:
> This series provides an asynchronous means of reporting to a hypervisor
> that a guest page is no longer in use and can have the data associated
> with it dropped. To do this I have implemented functionality that allows
> for what I am referring to as unused page reporting. The advantage of
> unused page reporting is that we can support a significant amount of
> memory over-commit with improved performance as we can avoid having to
> write/read memory from swap as the VM will instead actively participate
> in freeing unused memory so it doesn't have to be written.
> 
> The functionality for this is fairly simple. When enabled it will allocate
> statistics to track the number of reported pages in a given free area.
> When the number of free pages exceeds the number of reported pages plus a
> high water value, currently 32, it will begin page reporting, which
> consists of pulling non-reported pages off of the free lists of a given
> zone and placing them into a scatterlist. The scatterlist is then given to
> the page reporting device, which performs the required action to make the
> pages "reported"; in the case of virtio-balloon this results in the pages
> being madvised as MADV_DONTNEED. After this they are placed back on their
> original free list. If they are not merged while being freed, an
> additional bit is set indicating that they are "reported" buddy pages
> instead of standard buddy pages. The cycle then repeats, with additional
> non-reported pages being pulled until the free areas consist entirely of
> reported pages.
> 
> In order to keep the time needed to find a non-reported page to a minimum
> we maintain a "reported_boundary" pointer. This pointer is used by the
> get_unreported_pages iterator to determine at what point it should resume
> searching for non-reported pages. In order to guarantee that pages do not
> get past the scan, I have modified add_to_free_list_tail so that it will
> not insert pages behind the reported_boundary. Doing this keeps the
> overhead to a minimum, as re-walking the list without the boundary results
> in as much as 18% additional overhead on a 32G VM.
> 
> If another process needs to perform a massive manipulation of the free
> list, such as compaction, it can either reset a given individual boundary,
> which will push the boundary back to the list_head, or it can clear the
> bit indicating that the zone is actively processing, which will result in
> the reporting process resetting all of the boundaries for a given zone.
> 
> I am leaving a number of things hard-coded, such as limiting the lowest
> order processed to pageblock_order, and have left it up to the guest to
> determine the limit on how many pages it wants to allocate to process the
> hints. The upper limit for this is based on the size of the queue used to
> store the scatterlist.
> 
> I wanted to avoid gaming the performance testing for this. As far as
> possible gains, a significant performance improvement should be visible in
> cases where guests are forced to write/read from swap. As such, testing it
> would be more of a benchmark of copying a page from swap versus just
> allocating a zero page. I have been verifying that the memory is being
> freed by using memhog to allocate all the memory on the guest, and then
> watching /proc/meminfo to verify that the host sees the memory returned
> after the test completes.
> 
> As far as possible regressions, I have focused on cases where performing
> the hinting would be non-optimal, such as cases where the code isn't
> needed because memory is not over-committed, or the functionality is not
> in use. I have been using the will-it-scale/page_fault1 test running with
> 16 vcpus, modified to use Transparent Huge Pages. With this I see almost
> no difference with the patches applied and the feature disabled. Likewise
> I see almost no difference with the feature enabled but the madvise
> disabled in the hypervisor due to a device being assigned. With the
> feature fully enabled in both guest and hypervisor I see a regression
> between -1.86% and -8.84% versus the baseline. I found that most of the
> overhead was due to the page faulting/zeroing that comes as a result of
> the pages having been evicted from the guest.

I think Michal asked for a performance comparison against Nitesh's
approach, to evaluate if keeping the reported state + tracking inside
the buddy is really worth it. Do you have any such numbers already? (or
did my tired eyes miss them in this cover letter? :/)

-- 

Thanks,

David / dhildenb


[PATCH v11 0/6] mm / virtio: Provide support for unused page reporting

2019-10-01 Thread Alexander Duyck
This series provides an asynchronous means of reporting to a hypervisor
that a guest page is no longer in use and can have the data associated
with it dropped. To do this I have implemented functionality that allows
for what I am referring to as unused page reporting. The advantage of
unused page reporting is that we can support a significant amount of
memory over-commit with improved performance as we can avoid having to
write/read memory from swap as the VM will instead actively participate
in freeing unused memory so it doesn't have to be written.

The functionality for this is fairly simple. When enabled it will allocate
statistics to track the number of reported pages in a given free area.
When the number of free pages exceeds the number of reported pages plus a
high water value, currently 32, it will begin page reporting, which
consists of pulling non-reported pages off of the free lists of a given
zone and placing them into a scatterlist. The scatterlist is then given to
the page reporting device, which performs the required action to make the
pages "reported"; in the case of virtio-balloon this results in the pages
being madvised as MADV_DONTNEED. After this they are placed back on their
original free list. If they are not merged while being freed, an
additional bit is set indicating that they are "reported" buddy pages
instead of standard buddy pages. The cycle then repeats, with additional
non-reported pages being pulled until the free areas consist entirely of
reported pages.
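
To make the cycle concrete, one reporting pass amounts to roughly the
following sketch. The helper names (sketch_get_unreported_page,
sketch_putback_reported_page) are illustrative stand-ins for the series'
iterator and free-list helpers, and the zone locking of the real patches
is omitted:

#include <linux/mmzone.h>
#include <linux/scatterlist.h>

static void sketch_report_zone(struct zone *zone,
			       struct page_reporting_dev_info *prdev,
			       struct scatterlist *sgl,
			       unsigned int capacity)
{
	unsigned int nents = 0;
	struct page *page;

	/* Pull non-reported pages off the zone's free lists. */
	while (nents < capacity &&
	       (page = sketch_get_unreported_page(zone))) {
		/* Free buddy pages keep their order in page_private. */
		unsigned int order = page_private(page);

		sg_set_page(&sgl[nents++], page, PAGE_SIZE << order, 0);
	}

	if (!nents)
		return;

	/* virtio-balloon madvises these MADV_DONTNEED on the host. */
	prdev->report(prdev, sgl, nents);

	/* Return the pages to their free lists, marked "reported". */
	while (nents--)
		sketch_putback_reported_page(zone, sg_page(&sgl[nents]));
}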

In order to keep the time needed to find a non-reported page to a minimum
we maintain a "reported_boundary" pointer. This pointer is used by the
get_unreported_pages iterator to determine at what point it should resume
searching for non-reported pages. In order to guarantee that pages do not
get past the scan, I have modified add_to_free_list_tail so that it will
not insert pages behind the reported_boundary. Doing this keeps the
overhead to a minimum, as re-walking the list without the boundary results
in as much as 18% additional overhead on a 32G VM.
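
A sketch of what the boundary-aware tail insertion looks like (field and
helper names illustrative, not the series' exact code):

/* If a reporting pass is active, "tail" is the boundary rather than
 * the true list end, so a freed page can never slip in behind the
 * iterator into the already-reported section. */
static void sketch_add_to_free_list_tail(struct page *page, struct zone *zone,
					 unsigned int order, int migratetype)
{
	struct list_head *tail =
		sketch_reported_boundary(zone, order, migratetype);

	/* With no reporting active the boundary is the list_head
	 * itself, making this an ordinary tail add. */
	list_add_tail(&page->lru, tail);
	zone->free_area[order].nr_free++;
}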

If another process needs to perform a massive manipulation of the free
list, such as compaction, it can either reset a given individual boundary,
which will push the boundary back to the list_head, or it can clear the
bit indicating that the zone is actively processing, which will result in
the reporting process resetting all of the boundaries for a given zone.
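
In sketch form, the two escape hatches look something like this (the
flag name and where the boundaries are stored are illustrative, not
taken from the patches):

enum { SKETCH_ZONE_REPORTING_ACTIVE = 8 };	/* hypothetical zone flag */

/* Reset a single boundary back to the start of its free list. */
static void sketch_reset_boundary(struct zone *zone, unsigned int order,
				  int migratetype)
{
	zone->reported_boundary[order][migratetype] =
		&zone->free_area[order].free_list[migratetype];
}

/* Or take the big hammer: clear the "actively processing" bit and let
 * the reporting thread reset every boundary in the zone itself. */
static void sketch_cancel_zone_reporting(struct zone *zone)
{
	clear_bit(SKETCH_ZONE_REPORTING_ACTIVE, &zone->flags);
}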

I am leaving a number of things hard-coded, such as limiting the lowest
order processed to pageblock_order, and have left it up to the guest to
determine the limit on how many pages it wants to allocate to process the
hints. The upper limit for this is based on the size of the queue used to
store the scatterlist.
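
The order cutoff itself is trivial; as a sketch (illustrative helper
name):

/* Only pageblock_order and larger pages are candidates for reporting;
 * everything smaller is left alone. */
static inline bool sketch_page_reporting_order_ok(unsigned int order)
{
	return order >= pageblock_order;
}

The batch size, by contrast, falls out of however many scatterlist
entries the device's queue can hold, which is why it is left to the
guest/device rather than hard-coded here.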

I wanted to avoid gaming the performance testing for this. As far as
possible gains, a significant performance improvement should be visible in
cases where guests are forced to write/read from swap. As such, testing it
would be more of a benchmark of copying a page from swap versus just
allocating a zero page. I have been verifying that the memory is being
freed by using memhog to allocate all the memory on the guest, and then
watching /proc/meminfo to verify that the host sees the memory returned
after the test completes.

As far as possible regressions, I have focused on cases where performing
the hinting would be non-optimal, such as cases where the code isn't
needed because memory is not over-committed, or the functionality is not
in use. I have been using the will-it-scale/page_fault1 test running with
16 vcpus, modified to use Transparent Huge Pages. With this I see almost
no difference with the patches applied and the feature disabled. Likewise
I see almost no difference with the feature enabled but the madvise
disabled in the hypervisor due to a device being assigned. With the
feature fully enabled in both guest and hypervisor I see a regression
between -1.86% and -8.84% versus the baseline. I found that most of the
overhead was due to the page faulting/zeroing that comes as a result of
the pages having been evicted from the guest.

For info on earlier versions you will need to follow the links provided
with the respective versions.

Changes from v9:
https://lore.kernel.org/lkml/20190907172225.10910.34302.stgit@localhost.localdomain/
Updated cover page
Dropped per-cpu page randomization entropy patch
Added "to_tail" boolean value to __free_one_page to improve readability
Renamed __shuffle_pick_tail to shuffle_pick_tail, avoiding extra inline function
Dropped arm64 HUGE_TLB_ORDER movement patch since it is no longer needed
Significant rewrite of page reporting functionality
  Updated logic to support interruptions from compaction
  get_unreported_page will now walk through reported sections
  Moved free_list manipulators out of mmzone.h and into page_alloc.c
  Removed page_reporting.h include from mmzone.h
  Split page_reporting.h between include/linux/ and mm/
  Added #include " to