Re: [Qemu-devel] [PATCH v19 3/7] xbitmap: add more operations

2017-12-18 Thread Wei Wang

On 12/17/2017 11:16 PM, Tetsuo Handa wrote:

Wang, Wei W wrote:

Wei Wang wrote:

But passing GFP_NOWAIT means that we can handle allocation failure.
There is no need to use the preload approach when we can handle allocation failure.

I think the reason we need xb_preload is that radix tree insertion
needs the memory to be preallocated already (it can't tolerate
allocation failure in the middle of an insert, probably because
handling the failure there isn't easy; Matthew may know the backstory
of this)

According to https://lwn.net/Articles/175432/ , I think that preloading is
needed only when failure to insert an item into a radix tree is a significant
problem.
That is, when failure to insert an item into a radix tree is not a problem, I
think that we don't need to use preloading.

It also mentions that the preload attempts to allocate sufficient memory to 
*guarantee* that the next radix tree insertion cannot fail.

If we check radix_tree_node_alloc(), the comment there says "this assumes that the 
caller has performed appropriate preallocation".

If you read what radix_tree_node_alloc() is doing, you will find that
radix_tree_node_alloc() returns NULL when memory allocation failed.

I think that "this assumes that the caller has performed appropriate 
preallocation"
means "The caller has to perform appropriate preallocation if the caller does 
not
want radix_tree_node_alloc() to return NULL".


For the radix tree, I agree that we may not need preload. But 
ida_bitmap, which the xbitmap is based on, is allocated via preload, so 
I think we cannot bypass preload; otherwise we get no ida_bitmap to use.
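
To illustrate, here is a minimal sketch of the calling pattern this implies
(assuming the xb_ API from this series; exact signatures may differ):

/*
 * Sketch only: xb_preload() preallocates (including the ida_bitmap)
 * outside the lock, so that the set inside the lock has memory to
 * consume.  'my_lock' and 'my_xb' are hypothetical names.
 */
xb_preload(GFP_KERNEL);
spin_lock(&my_lock);
ret = xb_set_bit(&my_xb, bit);
spin_unlock(&my_lock);
xb_preload_end();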


Best,
Wei







Re: [Qemu-devel] [PATCH v19 3/7] xbitmap: add more operations

2017-12-17 Thread Matthew Wilcox
On Mon, Dec 18, 2017 at 10:33:00AM +0800, Wei Wang wrote:
> > My only qualm is that I've been considering optimising the memory
> > consumption when an entire 1024-bit chunk is full; instead of keeping a
> > pointer to a 128-byte entry full of ones, store a special value in the
> > radix tree which means "every bit is set".
> > 
> > The downside is that we then have to pass GFP flags to xbit_clear() and
> > xbit_zero(), and they can fail.  It's not clear to me whether that's a
> > good tradeoff.
> 
> Yes, this will sacrifice performance. In many usages, users may set bits one
> by one, and each time a bit is set, it needs to scan the whole
> ida_bitmap to see if all other bits are set; if so, it can free the
> ida_bitmap. I think this extra scanning of the ida_bitmap would add a lot
> of overhead.

Not a huge amount of overhead.  An ida_bitmap is only two cachelines,
and the loop is simply 'check each word against ~0ul', so up to 16
load/test/loop instructions.  Plus we have to do that anyway to maintain
the free tag for IDAs.
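
As a rough sketch, the scan described above could look like this (assuming the
ida_bitmap layout and IDA_BITMAP_LONGS from <linux/idr.h>; illustrative only,
not code from the series):

/* Sketch: is every bit in this ida_bitmap set? */
static bool ida_bitmap_full(const struct ida_bitmap *bitmap)
{
	unsigned int i;

	for (i = 0; i < IDA_BITMAP_LONGS; i++)
		if (bitmap->bitmap[i] != ~0UL)
			return false;	/* found a clear bit; not full */
	return true;
}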

> > But I need to get the XArray (which replaces the radix tree) finished first.
> 
> OK. It seems the new implementation won't be ready soon.
> Other parts of this patch series are close to the end of review, and we hope
> to make some progress soon. Would it be acceptable that we continue with the
> basic xb_ implementation for this patch series (e.g. as xbitmap 1.0), with the
> xbit_ implementation coming as xbitmap 2.0 in the future?

Yes, absolutely, I don't want to hold you up behind the XArray.



Re: [Qemu-devel] [PATCH v19 3/7] xbitmap: add more operations

2017-12-17 Thread Wei Wang

On 12/18/2017 06:18 AM, Matthew Wilcox wrote:

On Sun, Dec 17, 2017 at 01:47:21PM +, Wang, Wei W wrote:

On Saturday, December 16, 2017 3:22 AM, Matthew Wilcox wrote:

On Fri, Dec 15, 2017 at 10:49:15AM -0800, Matthew Wilcox wrote:
  - xbit_clear() can't return an error.  Neither can xbit_zero().

I found the current xbit_clear implementation only returns 0, and there isn't an error to 
be returned from this function. In this case, is it better to make the function 
"void"?

Yes, I think so.

My only qualm is that I've been considering optimising the memory
consumption when an entire 1024-bit chunk is full; instead of keeping a
pointer to a 128-byte entry full of ones, store a special value in the
radix tree which means "every bit is set".

The downside is that we then have to pass GFP flags to xbit_clear() and
xbit_zero(), and they can fail.  It's not clear to me whether that's a
good tradeoff.


Yes, this will sacrifice performance. In many usages, users may set bits 
one by one, and each time a bit is set, it needs to scan the whole 
ida_bitmap to see if all other bits are set; if so, it can free the 
ida_bitmap. I think this extra scanning of the ida_bitmap would add a 
lot of overhead.






Are you suggesting renaming the current xb_ APIs to the above xbit_ names 
(with parameter changes)?

Why would we need xbit_alloc, which looks like ida_get_new? I think set/clear 
should be adequate for the current usages.

I'm intending on replacing the xb_ and ida_ implementations with this one.
It removes the preload API which makes it easier to use, and it handles
the locking for you.

But I need to get the XArray (which replaces the radix tree) finished first.


OK. It seems the new implementation won't be ready soon.
Other parts of this patch series are close to the end of review, and we 
hope to make some progress soon. Would it be acceptable that we continue 
with the basic xb_ implementation for this patch series (e.g. as xbitmap 
1.0), with the xbit_ implementation coming as xbitmap 2.0 in the future?


Best,
Wei






Re: [Qemu-devel] [PATCH v19 3/7] xbitmap: add more operations

2017-12-17 Thread Matthew Wilcox
On Sun, Dec 17, 2017 at 01:47:21PM +, Wang, Wei W wrote:
> On Saturday, December 16, 2017 3:22 AM, Matthew Wilcox wrote:
> > On Fri, Dec 15, 2017 at 10:49:15AM -0800, Matthew Wilcox wrote:
> > > Here's the API I'm looking at right now.  The user need take no lock;
> > > the locking (spinlock) is handled internally to the implementation.
> 
> In another place I saw your comment "The xb_ API requires you to handle your 
> own locking", which seems to conflict with the above "the user need take no lock".
> Doesn't the caller need a lock to avoid concurrent accesses to the ida bitmap?

Yes, the xb_ implementation requires you to handle your own locking.
The xbit_ API that I'm proposing will take care of the locking for you.
There's also no preallocation in the API.

> We'll change it to "bool xb_find_set(.., unsigned long *result)", where 
> returning false indicates that no "1" bit is found.

I put a replacement proposal in the next paragraph:
bool xbit_find_set(struct xbitmap *, unsigned long *start, unsigned long max);

Maybe 'start' is the wrong name for that parameter.  Let's call it 'bit'.
It's both "where to start" and "first bit found".

> >  - xbit_clear() can't return an error.  Neither can xbit_zero().
> 
> I found the current xbit_clear implementation only returns 0, and there isn't 
> an error to be returned from this function. In this case, is it better to 
> make the function "void"?

Yes, I think so.

My only qualm is that I've been considering optimising the memory
consumption when an entire 1024-bit chunk is full; instead of keeping a
pointer to a 128-byte entry full of ones, store a special value in the
radix tree which means "every bit is set".

The downside is that we then have to pass GFP flags to xbit_clear() and
xbit_zero(), and they can fail.  It's not clear to me whether that's a
good tradeoff.

> Are you suggesting renaming the current xb_ APIs to the above xbit_ names 
> (with parameter changes)? 
> 
> Why would we need xbit_alloc, which looks like ida_get_new? I think set/clear 
> should be adequate for the current usages.

I'm intending on replacing the xb_ and ida_ implementations with this one.
It removes the preload API which makes it easier to use, and it handles
the locking for you.

But I need to get the XArray (which replaces the radix tree) finished first.



Re: [Qemu-devel] [PATCH v19 3/7] xbitmap: add more operations

2017-12-17 Thread Tetsuo Handa
Wang, Wei W wrote:
> > Wei Wang wrote:
> > > > But passing GFP_NOWAIT means that we can handle allocation failure.
> > > > There is no need to use the preload approach when we can handle allocation 
> > > > failure.
> > >
> > > I think the reason we need xb_preload is that radix tree insertion
> > > needs the memory to be preallocated already (it can't tolerate
> > > allocation failure in the middle of an insert, probably because
> > > handling the failure there isn't easy; Matthew may know the backstory
> > > of this)
> > 
> > According to https://lwn.net/Articles/175432/ , I think that preloading is
> > needed only when failure to insert an item into a radix tree is a 
> > significant
> > problem.
> > That is, when failure to insert an item into a radix tree is not a problem, 
> > I
> > think that we don't need to use preloading.
> 
> It also mentions that the preload attempts to allocate sufficient memory to 
> *guarantee* that the next radix tree insertion cannot fail.
> 
> If we check radix_tree_node_alloc(), the comment there says "this assumes 
> that the caller has performed appropriate preallocation".

If you read what radix_tree_node_alloc() is doing, you will find that
radix_tree_node_alloc() returns NULL when memory allocation failed.

I think that "this assumes that the caller has performed appropriate 
preallocation"
means "The caller has to perform appropriate preallocation if the caller does 
not
want radix_tree_node_alloc() to return NULL".

> 
> So, I think we risk triggering some issue without preload().
> 
> > >
> > > So, I think we can handle the memory failure with xb_preload, which
> > > stops us before going into the radix tree APIs, but we shouldn't call
> > > radix tree APIs without the related memory preallocated.
> > 
> > It seems to me that the virtio-balloon case has no problem without using
> > preloading.
> 
> Why is that?
> 

Because you are saying in PATCH 4/7 that it is OK to fail xb_set_page()
due to -ENOMEM (apart from the lack of ability to fall back to the !use_sg
path when all xb_set_page() calls fail (i.e. no page will be handled because
there is no "1" bit in the xbitmap)).


+static inline int xb_set_page(struct virtio_balloon *vb,
+  struct page *page,
+  unsigned long *pfn_min,
+  unsigned long *pfn_max)
+{
+   unsigned long pfn = page_to_pfn(page);
+   int ret;
+
+   *pfn_min = min(pfn, *pfn_min);
+   *pfn_max = max(pfn, *pfn_max);
+
+   do {
+   ret = xb_preload_and_set_bit(&vb->page_xb, pfn,
+GFP_NOWAIT | __GFP_NOWARN);
+   } while (unlikely(ret == -EAGAIN));
+
+   return ret;
+}

@@ -173,8 +290,15 @@ static unsigned fill_balloon(struct virtio_balloon *vb, 
size_t num)
 
while ((page = balloon_page_pop())) {
balloon_page_enqueue(&vb->vb_dev_info, page);
+   if (use_sg) {
+   if (xb_set_page(vb, page, &pfn_min, &pfn_max) < 0) {
+   __free_page(page);
+   continue;
+   }
+   } else {
+   set_page_pfns(vb, vb->pfns + vb->num_pfns, page);
+   }
 

@@ -223,7 +354,14 @@ static unsigned leak_balloon(struct virtio_balloon *vb, 
size_t num)
page = balloon_page_dequeue(vb_dev_info);
if (!page)
break;
-   set_page_pfns(vb, vb->pfns + vb->num_pfns, page);
+   if (use_sg) {
+   if (xb_set_page(vb, page, &pfn_min, &pfn_max) < 0) {
+   balloon_page_enqueue(&vb->vb_dev_info, page);
+   break;
+   }
+   } else {
+   set_page_pfns(vb, vb->pfns + vb->num_pfns, page);
+   }
list_add(&page->lru, &pages);
vb->num_pages -= VIRTIO_BALLOON_PAGES_PER_PAGE;
}



Re: [Qemu-devel] [PATCH v19 3/7] xbitmap: add more operations

2017-12-17 Thread Wang, Wei W
On Saturday, December 16, 2017 3:22 AM, Matthew Wilcox wrote:
> On Fri, Dec 15, 2017 at 10:49:15AM -0800, Matthew Wilcox wrote:
> > Here's the API I'm looking at right now.  The user need take no lock;
> > the locking (spinlock) is handled internally to the implementation.

In another place I saw your comment "The xb_ API requires you to handle your own 
locking", which seems to conflict with the above "the user need take no lock".
Doesn't the caller need a lock to avoid concurrent accesses to the ida bitmap?


> I looked at the API some more and found some flaws:
>  - how does xbit_alloc communicate back which bit it allocated?
>  - What if xbit_find_set() is called on a completely empty array with
>a range of 0, ULONG_MAX -- there's no invalid number to return.

We'll change it to "bool xb_find_set(.., unsigned long *result)", where returning 
false indicates that no "1" bit is found.


>  - xbit_clear() can't return an error.  Neither can xbit_zero().

I found the current xbit_clear implementation only returns 0, and there isn't 
an error to be returned from this function. In this case, is it better to make 
the function "void"?


>  - Need to add __must_check to various return values to discourage sloppy
>programming
> 
> So I modify the proposed API we compete with thusly:
> 
> bool xbit_test(struct xbitmap *, unsigned long bit);
> int __must_check xbit_set(struct xbitmap *, unsigned long bit, gfp_t);
> void xbit_clear(struct xbitmap *, unsigned long bit);
> int __must_check xbit_alloc(struct xbitmap *, unsigned long *bit, gfp_t);
> 
> int __must_check xbit_fill(struct xbitmap *, unsigned long start,
> unsigned long nbits, gfp_t);
> void xbit_zero(struct xbitmap *, unsigned long start, unsigned long nbits);
> int __must_check xbit_alloc_range(struct xbitmap *, unsigned long *bit,
> unsigned long nbits, gfp_t);
> 
> bool xbit_find_clear(struct xbitmap *, unsigned long *start, unsigned long max);
> bool xbit_find_set(struct xbitmap *, unsigned long *start, unsigned long max);
> 
> (I'm a little sceptical about the API accepting 'max' for the find functions 
> and
> 'nbits' in the fill/zero/alloc_range functions, but I think that matches how
> people want to use it, and it matches how bitmap.h works)

Are you suggesting renaming the current xb_ APIs to the above xbit_ names 
(with parameter changes)? 

Why would we need xbit_alloc, which looks like ida_get_new? I think set/clear 
should be adequate for the current usages.

Best,
Wei






Re: [Qemu-devel] [PATCH v19 3/7] xbitmap: add more operations

2017-12-17 Thread Wang, Wei W


> -Original Message-
> From: Tetsuo Handa [mailto:penguin-ker...@i-love.sakura.ne.jp]
> Sent: Sunday, December 17, 2017 6:22 PM
> To: Wang, Wei W ; wi...@infradead.org
> Cc: virtio-...@lists.oasis-open.org; linux-ker...@vger.kernel.org; qemu-
> de...@nongnu.org; virtualizat...@lists.linux-foundation.org;
> k...@vger.kernel.org; linux...@kvack.org; m...@redhat.com;
> mho...@kernel.org; a...@linux-foundation.org; mawil...@microsoft.com;
> da...@redhat.com; cornelia.h...@de.ibm.com;
> mgor...@techsingularity.net; aarca...@redhat.com;
> amit.s...@redhat.com; pbonz...@redhat.com;
> liliang.opensou...@gmail.com; yang.zhang...@gmail.com;
> quan...@aliyun.com; ni...@redhat.com; r...@redhat.com
> Subject: Re: [PATCH v19 3/7] xbitmap: add more operations
> 
> Wei Wang wrote:
> > > But passing GFP_NOWAIT means that we can handle allocation failure.
> > There is no need to use the preload approach when we can handle allocation
> failure.
> >
> > I think the reason we need xb_preload is that radix tree insertion
> > needs the memory to be preallocated already (it can't tolerate
> > allocation failure in the middle of an insert, probably because
> > handling the failure there isn't easy; Matthew may know the backstory
> > of this)
> 
> According to https://lwn.net/Articles/175432/ , I think that preloading is
> needed only when failure to insert an item into a radix tree is a significant
> problem.
> That is, when failure to insert an item into a radix tree is not a problem, I
> think that we don't need to use preloading.

It also mentions that the preload attempts to allocate sufficient memory to 
*guarantee* that the next radix tree insertion cannot fail.

If we check radix_tree_node_alloc(), the comment there says "this assumes that 
the caller has performed appropriate preallocation".

So, I think we risk triggering some issue without preload().

> >
> > So, I think we can handle the memory failure with xb_preload, which
> > stops us before going into the radix tree APIs, but we shouldn't call
> > radix tree APIs without the related memory preallocated.
> 
> It seems to me that the virtio-balloon case has no problem without using
> preloading.

Why is that?

Best,
Wei




Re: [Qemu-devel] [PATCH v19 3/7] xbitmap: add more operations

2017-12-17 Thread Tetsuo Handa
Wei Wang wrote:
> > But passing GFP_NOWAIT means that we can handle allocation failure. There is
> > no need to use the preload approach when we can handle allocation failure.
> 
> I think the reason we need xb_preload is that radix tree insertion 
> needs the memory to be preallocated already (it can't tolerate 
> allocation failure in the middle of an insert, probably because 
> handling the failure there isn't easy; Matthew may know the backstory of 
> this)

According to https://lwn.net/Articles/175432/ , I think that preloading is 
needed
only when failure to insert an item into a radix tree is a significant problem.
That is, when failure to insert an item into a radix tree is not a problem,
I think that we don't need to use preloading.

> 
> So, I think we can handle the memory failure with xb_preload, which 
> stops us before going into the radix tree APIs, but we shouldn't call radix 
> tree APIs without the related memory preallocated.

It seems to me that the virtio-balloon case has no problem without using preloading.



Re: [Qemu-devel] [PATCH v19 3/7] xbitmap: add more operations

2017-12-16 Thread Wei Wang

On 12/16/2017 07:28 PM, Tetsuo Handa wrote:

Wei Wang wrote:

On 12/16/2017 02:42 AM, Matthew Wilcox wrote:

On Tue, Dec 12, 2017 at 07:55:55PM +0800, Wei Wang wrote:

+int xb_preload_and_set_bit(struct xb *xb, unsigned long bit, gfp_t gfp);

I'm struggling to understand when one would use this.  The xb_ API
requires you to handle your own locking.  But specifying GFP flags
here implies you can sleep.  So ... um ... there's no locking?

In the regular use cases, people would do xb_preload() before taking the
lock, and do the xb_set/clear within the lock.

In the virtio-balloon usage, we have a large number of bits to set with
the balloon_lock being held (we're not unlocking for each bit), so we
used the above wrapper to do the preload and set within the balloon_lock,
and passed in GFP_NOWAIT to avoid sleeping. Probably we can move this
wrapper implementation into virtio-balloon, since it would not be
useful for the regular cases.
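
For reference, such a wrapper can be little more than the preload pattern
folded into one call; a sketch, assuming xb_preload() is void and xb_set_bit()
reports failure (the actual signatures in this series may differ):

int xb_preload_and_set_bit(struct xb *xb, unsigned long bit, gfp_t gfp)
{
	int ret;

	xb_preload(gfp);		/* assumed void, as in the xb_ code */
	ret = xb_set_bit(xb, bit);	/* e.g. -EAGAIN/-ENOMEM on failure */
	xb_preload_end();
	return ret;
}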

GFP_NOWAIT is chosen in order not to try to OOM-kill something, isn't it?


Yes, I think that's exactly the issue we are discussing here (also 
discussed in the deadlock patch before): suppose we use a sleepable 
flag like GFP_KERNEL, which puts the caller (fill_balloon or leak_balloon) 
to sleep with balloon_lock being held; the memory reclaim triggered by 
GFP_KERNEL can fall into the OOM code path, which first invokes the 
oom_notify-->leak_balloon to release some balloon memory, and that needs to 
take the balloon_lock that is being held by the sleeping task.


So, using GFP_NOWAIT avoids sleeping to get memory through direct 
memory reclaim, which could fall into that OOM code path that needs 
to take the balloon_lock.




But passing GFP_NOWAIT means that we can handle allocation failure. There is
no need to use the preload approach when we can handle allocation failure.


I think the reason we need xb_preload is that radix tree insertion 
needs the memory to be preallocated already (it can't tolerate 
allocation failure in the middle of an insert, probably because 
handling the failure there isn't easy; Matthew may know the backstory of 
this)


So, I think we can handle the memory failure with xb_preload, which 
stops us before going into the radix tree APIs, but we shouldn't call radix 
tree APIs without the related memory preallocated.


Best,
Wei







Re: [Qemu-devel] [PATCH v19 3/7] xbitmap: add more operations

2017-12-16 Thread Tetsuo Handa
Wei Wang wrote:
> On 12/16/2017 02:42 AM, Matthew Wilcox wrote:
> > On Tue, Dec 12, 2017 at 07:55:55PM +0800, Wei Wang wrote:
> >> +int xb_preload_and_set_bit(struct xb *xb, unsigned long bit, gfp_t gfp);
> > I'm struggling to understand when one would use this.  The xb_ API
> > requires you to handle your own locking.  But specifying GFP flags
> > here implies you can sleep.  So ... um ... there's no locking?
> 
> In the regular use cases, people would do xb_preload() before taking the 
> lock, and do the xb_set/clear within the lock.
> 
> In the virtio-balloon usage, we have a large number of bits to set with 
> the balloon_lock being held (we're not unlocking for each bit), so we 
> used the above wrapper to do the preload and set within the balloon_lock, 
> and passed in GFP_NOWAIT to avoid sleeping. Probably we can move this 
> wrapper implementation into virtio-balloon, since it would not be 
> useful for the regular cases.

GFP_NOWAIT is chosen in order not to try to OOM-kill something, isn't it?
But passing GFP_NOWAIT means that we can handle allocation failure. There is
no need to use the preload approach when we can handle allocation failure.



Re: [Qemu-devel] [PATCH v19 3/7] xbitmap: add more operations

2017-12-16 Thread Wei Wang

On 12/15/2017 12:29 AM, Tetsuo Handa wrote:

Wei Wang wrote:

I used the example of xb_clear_bit_range(), and xb_find_next_bit() is
fundamentally the same. Please let me know if anywhere still looks fuzzy.

I don't think it is the same for xb_find_next_bit() with set == 0.

+   if (radix_tree_exception(bmap)) {
+   unsigned long tmp = (unsigned long)bmap;
+   unsigned long ebit = bit + 2;
+
+   if (ebit >= BITS_PER_LONG)
+   continue;
+   if (set)
+   ret = find_next_bit(&tmp, BITS_PER_LONG, ebit);
+   else
+   ret = find_next_zero_bit(&tmp, BITS_PER_LONG,
+ebit);
+   if (ret < BITS_PER_LONG)
+   return ret - 2 + IDA_BITMAP_BITS * index;

What I'm saying is that find_next_zero_bit() will not be called if you do
"if (ebit >= BITS_PER_LONG) continue;" before calling find_next_zero_bit().

When scanning
"0000000000000000000000000000000000000000000000000000000000000001",
"bit < BITS_PER_LONG - 2" case finds "0" in this word but
"bit >= BITS_PER_LONG - 2" case finds "0" in next word or segment.

I can't understand why this is correct behavior. It is too puzzling.



OK, I'll post out a version without the exceptional path.

Best,
Wei




Re: [Qemu-devel] [PATCH v19 3/7] xbitmap: add more operations

2017-12-16 Thread Wei Wang

On 12/16/2017 02:42 AM, Matthew Wilcox wrote:

On Tue, Dec 12, 2017 at 07:55:55PM +0800, Wei Wang wrote:

+int xb_preload_and_set_bit(struct xb *xb, unsigned long bit, gfp_t gfp);

I'm struggling to understand when one would use this.  The xb_ API
requires you to handle your own locking.  But specifying GFP flags
here implies you can sleep.  So ... um ... there's no locking?


In the regular use cases, people would do xb_preload() before taking the 
lock, and the xb_set/clear within the lock.


In the virtio-balloon usage, we have a large number of bits to set with 
the balloon_lock being held (we're not unlocking for each bit), so we 
used the above wrapper to do preload and set within the balloon_lock, 
and passed in GFP_NOWAIT to avoid sleeping. Probably we can change to 
put this wrapper implementation to virtio-balloon, since it would not be 
useful for the regular cases.



Best,
Wei



Re: [Qemu-devel] [PATCH v19 3/7] xbitmap: add more operations

2017-12-15 Thread Tetsuo Handa
Matthew Wilcox wrote:
> On Sat, Dec 16, 2017 at 01:31:24PM +0900, Tetsuo Handa wrote:
> > Michael S. Tsirkin wrote:
> > > On Sat, Dec 16, 2017 at 01:21:52AM +0900, Tetsuo Handa wrote:
> > > > My understanding is that virtio-balloon wants to handle sparsely spread
> > > > unsigned long values (which is PATCH 4/7) and wants to find all chunks of
> > > > consecutive "1" bits efficiently. Therefore, I guess that holding the values
> > > > in ascending order at store time is faster than sorting the values at read
> > > > time.
> 
> What makes you think that the radix tree (also xbitmap, also idr) doesn't
> sort the values at store time?

I don't care whether the radix tree sorts the values at store time.
What I care about is how to read stored values in ascending order with less
overhead.

Existing users are heavily optimized and difficult for newcomers to
understand. I would appreciate simple sample code which explains how to
use the library functions and which can be compiled/tested in userspace.
Your "- look at ->head, see it is NULL, return false." answer did not
give me any useful information.

> 
> > I'm asking whether we really need to invent a new library module (i.e.
> > PATCH 1/7 + PATCH 2/7 + PATCH 3/7) for virtio-balloon compared to mine.
> > 
> > What virtio-balloon needs is ability to
> > 
> >   (1) record any integer value in [0, ULONG_MAX] range
> > 
> >   (2) fetch all recorded values, with consecutive values combined in
> >   min,max (or start,count) form for efficiency
> > 
> > and I wonder whether we need to invent the complete API set which
> > Matthew Wilcox and Wei Wang are planning for generic purposes.
> 
> The xbitmap absolutely has that ability.

Current patches are too tricky to review.
When will all corner cases be closed?

>   And making it generic code
> means more people see it, use it, debug it, optimise it.

I'm not objecting to generic code. But trying to optimize it can
introduce bugs; for example, the exception path makes it difficult for me to
review whether the implementation is correct. I'm suggesting starting xbitmap
without the exception path, and I haven't seen a version without one.

>   I originally
> wrote the implementation for bcache, when Kent was complaining we didn't
> have such a thing.  His needs weren't as complex as Wei's, which is why
> I hadn't implemented everything that Wei needed.
> 

Unless the current xbitmap patches become clear, the virtio-balloon changes
won't be able to get merged. We keep respinning this series without closing
many bugs. We can start the virtio-balloon changes with stub code which
provides (1) and (2), and that's my version.



Re: [Qemu-devel] [PATCH v19 3/7] xbitmap: add more operations

2017-12-15 Thread Matthew Wilcox
On Sat, Dec 16, 2017 at 01:31:24PM +0900, Tetsuo Handa wrote:
> Michael S. Tsirkin wrote:
> > On Sat, Dec 16, 2017 at 01:21:52AM +0900, Tetsuo Handa wrote:
> > > My understanding is that virtio-balloon wants to handle sparsely spread
> > > unsigned long values (which is PATCH 4/7) and wants to find all chunks of
> > > consecutive "1" bits efficiently. Therefore, I guess that holding the 
> > > values
> > > in ascending order at store time is faster than sorting the values at read
> > > time.

What makes you think that the radix tree (also xbitmap, also idr) doesn't
sort the values at store time?

> I'm asking whether we really need to invent a new library module (i.e.
> PATCH 1/7 + PATCH 2/7 + PATCH 3/7) for virtio-balloon compared to mine.
> 
> What virtio-balloon needs is ability to
> 
>   (1) record any integer value in [0, ULONG_MAX] range
> 
>   (2) fetch all recorded values, with consecutive values combined in
>   min,max (or start,count) form for efficiency
> 
> and I wonder whether we need to invent the complete API set which
> Matthew Wilcox and Wei Wang are planning for generic purposes.

The xbitmap absolutely has that ability.  And making it generic code
means more people see it, use it, debug it, optimise it.  I originally
wrote the implementation for bcache, when Kent was complaining we didn't
have such a thing.  His needs weren't as complex as Wei's, which is why
I hadn't implemented everything that Wei needed.



Re: [Qemu-devel] [PATCH v19 3/7] xbitmap: add more operations

2017-12-15 Thread Tetsuo Handa
Michael S. Tsirkin wrote:
> On Sat, Dec 16, 2017 at 01:21:52AM +0900, Tetsuo Handa wrote:
> > My understanding is that virtio-balloon wants to handle sparsely spread
> > unsigned long values (which is PATCH 4/7) and wants to find all chunks of
> > consecutive "1" bits efficiently. Therefore, I guess that holding the values
> > in ascending order at store time is faster than sorting the values at read
> > time.
> 
> Are you asking why a bitmap is used here, as opposed to a tree?

No. I'm OK with "segments using trees" + "offsets using bitmaps".

>  It's
> not just store versus read. There's also the issue that memory can get
> highly fragmented; if it is, the number of 1s is potentially very high.
> A bitmap can use as little as 1 bit per value, it is hard to beat in
> this respect.
> 

I'm asking whether we really need to invent a new library module (i.e.
PATCH 1/7 + PATCH 2/7 + PATCH 3/7) for virtio-balloon compared to mine.

What virtio-balloon needs is ability to

  (1) record any integer value in [0, ULONG_MAX] range

  (2) fetch all recorded values, with consecutive values combined in
  min,max (or start,count) form for efficiency

and I wonder whether we need to invent the complete API set which
Matthew Wilcox and Wei Wang are planning for generic purposes.
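
For requirement (2), the search helpers in this series can already emit
(start, count) chunks by alternating the set/zero searches; a sketch (helper
names assumed from this series, using the PATCH 3/7 convention that a failed
search returns end + 1; report_range() is a placeholder consumer):

unsigned long start = 0, next;

for (;;) {
	start = xb_find_next_set_bit(xb, start, ULONG_MAX - 1);
	if (start > ULONG_MAX - 1)
		break;				/* no more set bits */
	next = xb_find_next_zero_bit(xb, start, ULONG_MAX - 1);
	report_range(start, next - start);	/* one consecutive "1" chunk */
	start = next;
}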



Re: [Qemu-devel] [PATCH v19 3/7] xbitmap: add more operations

2017-12-15 Thread Matthew Wilcox
On Fri, Dec 15, 2017 at 10:49:15AM -0800, Matthew Wilcox wrote:
> Here's the API I'm looking at right now.  The user need take no lock;
> the locking (spinlock) is handled internally to the implementation.

I looked at the API some more and found some flaws:
 - how does xbit_alloc communicate back which bit it allocated?
 - What if xbit_find_set() is called on a completely empty array with
   a range of 0, ULONG_MAX -- there's no invalid number to return.
 - xbit_clear() can't return an error.  Neither can xbit_zero().
 - Need to add __must_check to various return values to discourage sloppy
   programming

So I modify the proposed API we compete with thusly:

bool xbit_test(struct xbitmap *, unsigned long bit);
int __must_check xbit_set(struct xbitmap *, unsigned long bit, gfp_t);
void xbit_clear(struct xbitmap *, unsigned long bit);
int __must_check xbit_alloc(struct xbitmap *, unsigned long *bit, gfp_t);

int __must_check xbit_fill(struct xbitmap *, unsigned long start,
unsigned long nbits, gfp_t);
void xbit_zero(struct xbitmap *, unsigned long start, unsigned long nbits);
int __must_check xbit_alloc_range(struct xbitmap *, unsigned long *bit,
unsigned long nbits, gfp_t);

bool xbit_find_clear(struct xbitmap *, unsigned long *start, unsigned long max);
bool xbit_find_set(struct xbitmap *, unsigned long *start, unsigned long max);

(I'm a little sceptical about the API accepting 'max' for the find
functions and 'nbits' in the fill/zero/alloc_range functions, but I think
that matches how people want to use it, and it matches how bitmap.h works)



Re: [Qemu-devel] [PATCH v19 3/7] xbitmap: add more operations

2017-12-15 Thread Matthew Wilcox
On Sat, Dec 16, 2017 at 01:21:52AM +0900, Tetsuo Handa wrote:
> My understanding is that virtio-balloon wants to handle sparsely spread
> unsigned long values (which is PATCH 4/7) and wants to find all chunks of
> consecutive "1" bits efficiently. Therefore, I guess that holding the values
> in ascending order at store time is faster than sorting the values at read
> time. I don't know how to use the radix tree API, but I think that the B+
> tree API suits holding the values in ascending order.
> 
> Shall we wait for Wei to post the radix tree version combined into one patch
> and then compare performance between the radix tree version and the B+ tree
> version (shown below)?

Sure.  We all benefit from some friendly competition.  Even if a
competition between trees might remind one of the Entmoot ;-)

But let's not hold back -- let's figure out some good workloads to use
in our competition.  And we should also decide on the API / locking
constraints.  And of course we should compete based on not just speed,
but also memory consumption (both as a runtime overhead for a given set
of bits and as code size).  If you can replace the IDR, you get to count
that savings against the cost of your implementation.

Here's the API I'm looking at right now.  The user need take no lock;
the locking (spinlock) is handled internally to the implementation.

void xbit_init(struct xbitmap *xb);
int xbit_alloc(struct xbitmap *, unsigned long bit, gfp_t);
int xbit_alloc_range(struct xbitmap *, unsigned long start,
unsigned long nbits, gfp_t);
int xbit_set(struct xbitmap *, unsigned long bit, gfp_t);
bool xbit_test(struct xbitmap *, unsigned long bit);
int xbit_clear(struct xbitmap *, unsigned long bit);
int xbit_zero(struct xbitmap *, unsigned long start, unsigned long nbits);
int xbit_fill(struct xbitmap *, unsigned long start, unsigned long nbits,
gfp_t);
unsigned long xbit_find_clear(struct xbitmap *, unsigned long start,
unsigned long max);
unsigned long xbit_find_set(struct xbitmap *, unsigned long start,
unsigned long max);
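
To make the shape concrete, hypothetical usage of this proposal (the xbit_
API is not merged, so everything here is an assumption):

struct xbitmap xb;
int err;

xbit_init(&xb);
err = xbit_set(&xb, 12345, GFP_KERNEL);	/* allocates internally; no preload,
					   no caller-side lock */
if (err)
	/* handle e.g. -ENOMEM */;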

> static bool set_ulong(struct ulong_list_head *head, const unsigned long value)
> {
>   if (!ptr) {
>   ptr = kzalloc(sizeof(*ptr), GFP_NOWAIT | __GFP_NOWARN);
>   if (!ptr)
>   goto out1;
>   ptr->bitmap = kzalloc(BITMAP_LEN / 8,
> GFP_NOWAIT | __GFP_NOWARN);
>   if (!ptr->bitmap)
>   goto out2;
>   if (btree_insertl(&head->btree, ~segment, ptr,
>  GFP_NOWAIT | __GFP_NOWARN))
>   goto out3;
> out3:
>   kfree(ptr->bitmap);
> out2:
>   kfree(ptr);
> out1:
>   return false;
> }

And what is the user supposed to do if this returns false?  How do they
make headway?  The xb_ API is clear -- you call xb_prealloc and that
ensures forward progress.




Re: [Qemu-devel] [PATCH v19 3/7] xbitmap: add more operations

2017-12-15 Thread Matthew Wilcox
On Tue, Dec 12, 2017 at 07:55:55PM +0800, Wei Wang wrote:
> +int xb_preload_and_set_bit(struct xb *xb, unsigned long bit, gfp_t gfp);

I'm struggling to understand when one would use this.  The xb_ API
requires you to handle your own locking.  But specifying GFP flags
here implies you can sleep.  So ... um ... there's no locking?

> +void xb_clear_bit_range(struct xb *xb, unsigned long start, unsigned long 
> end);

That's xb_zero() which you deleted with the previous patch ... remember,
keep things as close as possible to the bitmap API.




Re: [Qemu-devel] [PATCH v19 3/7] xbitmap: add more operations

2017-12-15 Thread Michael S. Tsirkin
On Sat, Dec 16, 2017 at 01:21:52AM +0900, Tetsuo Handa wrote:
> My understanding is that virtio-balloon wants to handle sparsely spread
> unsigned long values (which is PATCH 4/7) and wants to find all chunks of
> consecutive "1" bits efficiently. Therefore, I guess that holding the values
> in ascending order at store time is faster than sorting the values at read
> time.

Are you asking why a bitmap is used here, as opposed to a tree?  It's
not just store versus read. There's also the issue that memory can get
highly fragmented; if it is, the number of 1s is potentially very high.
A bitmap can use as little as 1 bit per value, it is hard to beat in
this respect.

-- 
MST



Re: [Qemu-devel] [PATCH v19 3/7] xbitmap: add more operations

2017-12-15 Thread Tetsuo Handa
Matthew Wilcox wrote:
> On Fri, Dec 15, 2017 at 01:29:45AM +0900, Tetsuo Handa wrote:
> > > > Also, one more thing you need to check. Have you checked how long
> > > > xb_find_next_set_bit(xb, 0, ULONG_MAX) on an empty xbitmap takes?
> > > > If it causes a soft lockup warning, should we add cond_resched() ?
> > > > If yes, you have to document that this API might sleep. If no, you
> > > > have to document that the caller of this API is responsible for
> > > > not passing such a large value range.
> > > 
> > > Yes, that will take too long. Probably we can document some 
> > > comments as a reminder for the callers.
> > 
> > Then, I feel that the API is poorly implemented. There is no need to brute-force
> > when scanning the [0, ULONG_MAX] range. If you eliminate the exception path and
> > redesign the data structure, xbitmap will become as simple as the sample
> > implementation shown below. Not tested yet, but I think that this will be
> > sufficient for what virtio-balloon wants to do; i.e. find consecutive "1" bits
> > quickly. I didn't test whether finding "struct ulong_list_data" using a radix
> > tree can improve performance.
> 
> find_next_set_bit() is just badly implemented.  There is no need to
> redesign the data structure.  It should be a simple matter of:
> 
>  - look at ->head, see it is NULL, return false.
> 
> If bit 100 is set and you call find_next_set_bit(101, ULONG_MAX), it
> should look at block 0, see there is a pointer to it, scan the block,
> see there are no bits set above 100, then realise we're at the end of
> the tree and stop.
> 
> If bit 2000 is set, and you call find_next_set_bit(2001, ULONG_MAX)
> it should look at block 1, see there's no bit set after bit 2001, then
> look at the other blocks in the node, see that all the pointers are NULL
> and stop.
> 
> This isn't rocket science, we already do something like this in the radix
> tree and it'll be even easier to do in the XArray.  Which I'm going back
> to working on now.
> 

My understanding is that virtio-balloon wants to handle sparsely spread
unsigned long values (which is PATCH 4/7) and wants to find all chunks of
consecutive "1" bits efficiently. Therefore, I guess that holding the values
in ascending order at store time is faster than sorting the values at read
time. I don't know how to use the radix tree API, but I think that the B+ tree
API suits holding the values in ascending order.

Shall we wait for Wei to post the radix tree version combined into one patch and
then compare performance between the radix tree version and the B+ tree version
(shown below)?

--
#include <linux/btree.h>
#include <linux/slab.h>
#include <linux/bitops.h>

#define BITMAP_LEN 1024

struct ulong_list_data {
/* Segment for this offset bitmap. */
unsigned long segment;
/* Number of bits set in this offset bitmap. */
unsigned long bits;
/* Offset bitmap of BITMAP_LEN bits. */
unsigned long *bitmap;
};

struct ulong_list_head {
struct btree_headl btree;
struct ulong_list_data *last_used;
};

static int init_ulong(struct ulong_list_head *head)
{
head->last_used = NULL;
return btree_initl(&head->btree);
}

static bool set_ulong(struct ulong_list_head *head, const unsigned long value)
{
struct ulong_list_data *ptr = head->last_used;
const unsigned long segment = value / BITMAP_LEN;
const unsigned long offset = value % BITMAP_LEN;

if (!ptr || ptr->segment != segment)
ptr = btree_lookupl(&head->btree, ~segment);
if (!ptr) {
ptr = kzalloc(sizeof(*ptr), GFP_NOWAIT | __GFP_NOWARN);
if (!ptr)
goto out1;
ptr->bitmap = kzalloc(BITMAP_LEN / 8,
  GFP_NOWAIT | __GFP_NOWARN);
if (!ptr->bitmap)
goto out2;
if (btree_insertl(&head->btree, ~segment, ptr,
   GFP_NOWAIT | __GFP_NOWARN))
goto out3;
ptr->segment = segment;
}
head->last_used = ptr;
if (!test_bit(offset, ptr->bitmap)) {
__set_bit(offset, ptr->bitmap);
ptr->bits++;
}
return true;
out3:
kfree(ptr->bitmap);
out2:
kfree(ptr);
out1:
return false;
}

static void clear_ulong(struct ulong_list_head *head, const unsigned long value)
{
struct ulong_list_data *ptr = head->last_used;
const unsigned long segment = value / BITMAP_LEN;
const unsigned long offset = value % BITMAP_LEN;

if (!ptr || ptr->segment != segment) {
ptr = btree_lookupl(&head->btree, ~segment);
if (!ptr)
return;
head->last_used = ptr;
}
if (!test_bit(offset, ptr->bitmap))
return;
__clear_bit(offset, ptr->bitmap);
if (--ptr->bits)
return;
btree_removel(&head->btree, ~segment);
if 

Re: [Qemu-devel] [PATCH v19 3/7] xbitmap: add more operations

2017-12-14 Thread Matthew Wilcox
On Fri, Dec 15, 2017 at 01:29:45AM +0900, Tetsuo Handa wrote:
> > > Also, one more thing you need to check. Have you checked how long
> > > xb_find_next_set_bit(xb, 0, ULONG_MAX) on an empty xbitmap takes?
> > > If it causes a soft lockup warning, should we add cond_resched() ?
> > > If yes, you have to document that this API might sleep. If no, you
> > > have to document that the caller of this API is responsible for
> > > not passing such a large value range.
> > 
> > Yes, that will take too long. Probably we can document some 
> > comments as a reminder for the callers.
> 
> Then, I feel that the API is poorly implemented. There is no need to brute-force
> when scanning the [0, ULONG_MAX] range. If you eliminate the exception path and
> redesign the data structure, xbitmap will become as simple as the sample
> implementation shown below. Not tested yet, but I think that this will be
> sufficient for what virtio-balloon wants to do; i.e. find consecutive "1" bits
> quickly. I didn't test whether finding "struct ulong_list_data" using a radix
> tree can improve performance.

find_next_set_bit() is just badly implemented.  There is no need to
redesign the data structure.  It should be a simple matter of:

 - look at ->head, see it is NULL, return false.

If bit 100 is set and you call find_next_set_bit(101, ULONG_MAX), it
should look at block 0, see there is a pointer to it, scan the block,
see there are no bits set above 100, then realise we're at the end of
the tree and stop.

If bit 2000 is set, and you call find_next_set_bit(2001, ULONG_MAX)
it should look at block 1, see there's no bit set after bit 2001, then
look at the other blocks in the node, see that all the pointers are NULL
and stop.

This isn't rocket science, we already do something like this in the radix
tree and it'll be even easier to do in the XArray.  Which I'm going back
to working on now.
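
The same early-exit idea, reduced to a toy one-level table of 1024-bit blocks
(purely illustrative; not the radix tree or xbitmap code, just the kernel's
find_next_bit() applied per block):

/* Toy model: skip empty blocks, stop at the populated end. */
static bool toy_find_next_set_bit(unsigned long **blocks,
				  unsigned long nblocks, unsigned long *bit)
{
	unsigned long idx = *bit / 1024;
	unsigned long off = *bit % 1024;

	for (; idx < nblocks; idx++, off = 0) {
		unsigned long r;

		if (!blocks[idx])
			continue;	/* NULL block: nothing to scan */
		r = find_next_bit(blocks[idx], 1024, off);
		if (r < 1024) {
			*bit = idx * 1024 + r;
			return true;
		}
	}
	return false;	/* walked off the populated end: stop */
}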



Re: [Qemu-devel] [PATCH v19 3/7] xbitmap: add more operations

2017-12-14 Thread Tetsuo Handa
Wei Wang wrote:
> I used the example of xb_clear_bit_range(), and xb_find_next_bit() is 
> fundamentally the same. Please let me know if anywhere still looks fuzzy.

I don't think it is the same for xb_find_next_bit() with set == 0.

+   if (radix_tree_exception(bmap)) {
+   unsigned long tmp = (unsigned long)bmap;
+   unsigned long ebit = bit + 2;
+
+   if (ebit >= BITS_PER_LONG)
+   continue;
+   if (set)
+   ret = find_next_bit(&tmp, BITS_PER_LONG, ebit);
+   else
+   ret = find_next_zero_bit(&tmp, BITS_PER_LONG,
+ebit);
+   if (ret < BITS_PER_LONG)
+   return ret - 2 + IDA_BITMAP_BITS * index;

What I'm saying is that find_next_zero_bit() will not be called if you do
"if (ebit >= BITS_PER_LONG) continue;" before calling find_next_zero_bit().

When scanning
"0000000000000000000000000000000000000000000000000000000000000001",
"bit < BITS_PER_LONG - 2" case finds "0" in this word but
"bit >= BITS_PER_LONG - 2" case finds "0" in next word or segment.

I can't understand why this is correct behavior. It is too puzzling.



> > Also, one more thing you need to check. Have you checked how long
> > xb_find_next_set_bit(xb, 0, ULONG_MAX) on an empty xbitmap takes?
> > If it causes a soft lockup warning, should we add cond_resched() ?
> > If yes, you have to document that this API might sleep. If no, you
> > have to document that the caller of this API is responsible for
> > not passing such a large value range.
> 
> Yes, that will take too long. Probably we can document some 
> comments as a reminder for the callers.
> 

Then, I feel that the API is poorly implemented. There is no need to brute-force
when scanning the [0, ULONG_MAX] range. If you eliminate the exception path and
redesign the data structure, xbitmap will become as simple as the sample
implementation shown below. Not tested yet, but I think that this will be
sufficient for what virtio-balloon wants to do; i.e. find consecutive "1" bits
quickly. I didn't test whether finding "struct ulong_list_data" using a radix
tree can improve performance.


#include <linux/list.h>
#include <linux/slab.h>

#define BITMAP_LEN 1024

struct ulong_list_data {
struct list_head list;
unsigned long segment; /* prev->segment < segment < next->segment */
unsigned long bits;/* Number of bits set in this offset bitmap. */
unsigned long *bitmap; /* Offset bitmap of BITMAP_LEN bits. */
};

static struct ulong_list_data null_ulong_list = {
{ NULL, NULL }, ULONG_MAX, 0, NULL
};

struct ulong_list_head {
struct list_head list;
struct ulong_list_data *last_used;
};

static void init_ulong(struct ulong_list_head *head)
{
INIT_LIST_HEAD(&head->list);
head->last_used = &null_ulong_list;
}

static bool set_ulong(struct ulong_list_head *head, const unsigned long value)
{
struct ulong_list_data *ptr = head->last_used;
struct list_head *list = &head->list;
const unsigned long segment = value / BITMAP_LEN;
const unsigned long offset = value % BITMAP_LEN;
bool found = false;

if (ptr->segment == segment)
goto shortcut;
list_for_each_entry(ptr, &head->list, list) {
if (ptr->segment < segment) {
list = &ptr->list;
continue;
}
found = ptr->segment == segment;
break;
}
if (!found) {
ptr = kzalloc(sizeof(*ptr), GFP_NOWAIT | __GFP_NOWARN);
if (!ptr)
return false;
ptr->bitmap = kzalloc(BITMAP_LEN / 8,
  GFP_NOWAIT | __GFP_NOWARN);
if (!ptr->bitmap) {
kfree(ptr);
return false;
}
ptr->segment = segment;
list_add(&ptr->list, list);
}
head->last_used = ptr;
 shortcut:
if (!test_bit(offset, ptr->bitmap)) {
__set_bit(offset, ptr->bitmap);
ptr->bits++;
}
return true;
}

static void clear_ulong(struct ulong_list_head *head, const unsigned long value)
{
struct ulong_list_data *ptr = head->last_used;
const unsigned long segment = value / BITMAP_LEN;
const unsigned long offset = value % BITMAP_LEN;

if (ptr->segment == segment)
goto shortcut;
list_for_each_entry(ptr, &head->list, list) {
if (ptr->segment < segment)
continue;
if (ptr->segment == segment) {
head->last_used = ptr;
shortcut:
if (test_bit(offset, ptr->bitmap)) {
  

Re: [Qemu-devel] [PATCH v19 3/7] xbitmap: add more operations

2017-12-14 Thread Matthew Wilcox
On Wed, Dec 13, 2017 at 08:26:06PM +0800, Wei Wang wrote:
> On 12/12/2017 09:20 PM, Tetsuo Handa wrote:
> > Can you eliminate the exception path and fold all xbitmap patches into one, and
> > post only one xbitmap patch without the virtio-balloon changes? If the exception
> > path is valuable, you can add it after the minimum version is merged.
> > This series is too difficult for me to close all the corner cases.
> 
> That exception path is claimed to save memory, and I don't have a strong
> reason to remove that part.
> Matthew, could we get your feedback on this?

Sure.  This code is derived from the IDA code in lib/idr.c.  Eventually,
I intend to reunite them.  For IDA, it clearly makes sense; the first 62
entries result in allocating no memory at all, which is going to be 99%
of users.  After that, we allocate 128 bytes which will serve the first
1024 users.
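
The arithmetic behind those numbers, for reference: an exceptional entry keeps
its payload in one 64-bit word minus the two tag bits, i.e. 64 - 2 = 62
directly storable IDs, and a 128-byte ida_bitmap holds 128 * 8 = 1024 bits.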

The xbitmap, as used by Wei's patches here is going to be used somewhat
differently from that.  I understand why Tetsuo wants the exceptional
path removed; I'm not sure the gains will be as important.  But if we're
going to rebuild the IDA on top of the xbitmap, we need to keep them.

I really want to pay more attention to this, but I need to focus on
getting the XArray finished.



Re: [Qemu-devel] [PATCH v19 3/7] xbitmap: add more operations

2017-12-13 Thread Wei Wang

On 12/13/2017 10:16 PM, Tetsuo Handa wrote:

Wei Wang wrote:

On 12/12/2017 09:20 PM, Tetsuo Handa wrote:

Wei Wang wrote:

+void xb_clear_bit_range(struct xb *xb, unsigned long start, unsigned long end)
+{
+   struct radix_tree_root *root = &xb->xbrt;
+   struct radix_tree_node *node;
+   void **slot;
+   struct ida_bitmap *bitmap;
+   unsigned int nbits;
+
+   for (; start < end; start = (start | (IDA_BITMAP_BITS - 1)) + 1) {
+   unsigned long index = start / IDA_BITMAP_BITS;
+   unsigned long bit = start % IDA_BITMAP_BITS;
+
+   bitmap = __radix_tree_lookup(root, index, &node, &slot);
+   if (radix_tree_exception(bitmap)) {
+   unsigned long ebit = bit + 2;
+   unsigned long tmp = (unsigned long)bitmap;
+
+   nbits = min(end - start + 1, BITS_PER_LONG - ebit);
+
+   if (ebit >= BITS_PER_LONG)

What happens if we hit this "continue;" when "index == ULONG_MAX / 
IDA_BITMAP_BITS" ?

Thanks. I also improved the test case for this. I plan to change the
implementation a little bit to avoid such overflow (it has passed the test
cases that I have; just posting it out for another set of eyes):

{
...
  unsigned long idx = start / IDA_BITMAP_BITS;
  unsigned long bit = start % IDA_BITMAP_BITS;
  unsigned long idx_end = end / IDA_BITMAP_BITS;
  unsigned long ret;

  for (idx = start / IDA_BITMAP_BITS; idx <= idx_end; idx++) {
  unsigned long ida_start = idx * IDA_BITMAP_BITS;

  bitmap = __radix_tree_lookup(root, idx, &node, &slot);
  if (radix_tree_exception(bitmap)) {
  unsigned long tmp = (unsigned long)bitmap;
  unsigned long ebit = bit + 2;

  if (ebit >= BITS_PER_LONG)
  continue;

Will you please please do eliminate exception path?


Please first see my explanations below; I'll try to help you understand 
it thoroughly. If it is still too complex to understand in the end, 
then I think we can start from the fundamental part by removing the 
exceptional path, if there are no objections from others.



I can't interpret what "ebit >= BITS_PER_LONG" means.
The reason you "continue;" is that all bits beyond are "0", isn't it?
Then, it would make sense to "continue;" when finding next "1" because
all bits beyond are "0". But how does it make sense to "continue;" when
finding next "0" despite all bits beyond are "0"?



Not the case actually. Please see this example:
1) xb_set_bit(10); // bit 10 is set, so an exceptional entry (i.e. 
[0:62]) is used

2) xb_clear_bit_range(66, 2048);
- One ida bitmap size is 1024 bits, so this clear will be performed 
with 2 loops, first to clear [66, 1024), second to clear [1024, 2048)
- When the first loop clears [66, 1024), it finds that it is an 
exception entry (because bit 10 is set, and the 62-bit entry is enough 
to cover it). Another point we have to remember is that an exceptional 
entry implies that the rest of the bits [63, 1024) are all 0s.
- The starting bit 66 already exceeds the exceptional entry bit 
range [0, 62], and given that the rest of the bits are all 0s, it 
is time to just "continue", which goes to the second range [1024, 2048)


I used the example of xb_clear_bit_range(), and xb_find_next_bit() is 
fundamentally the same. Please let me know if anywhere still looks fuzzy.






if (set)
        ret = find_next_bit(&tmp, BITS_PER_LONG, ebit);
else
        ret = find_next_zero_bit(&tmp, BITS_PER_LONG,
                                 ebit);
if (ret < BITS_PER_LONG)
        return ret - 2 + ida_start;
} else if (bitmap) {
        if (set)
                ret = find_next_bit(bitmap->bitmap,
                                    IDA_BITMAP_BITS, bit);
        else
                ret = find_next_zero_bit(bitmap->bitmap,
                                         IDA_BITMAP_BITS, bit);

"bit" may not be 0 for the first round and "bit" is always 0 afterwords.
But where is the guaranteed that "end" is a multiple of IDA_BITMAP_BITS ?
Please explain why it is correct to use IDA_BITMAP_BITS unconditionally
for the last round.


Something was missing here; it will be:

nbits = min(end - ida_start + 1, IDA_BITMAP_BITS - bit);
if (set)
        ret = find_next_bit(bitmap->bitmap, nbits, bit);
else
        ret = find_next_zero_bit(bitmap->bitmap,
                                 nbits, bit);
if (ret < nbits)
        return ret + ida_start;



+/**
+ * xb_find_next_set_bit - find the next set bit in a range
+ * @xb: the xbitmap to search
+ * @start: the start of the range, inclusive
+ * @end: the end of the range, exclusive
+ *

Re: [Qemu-devel] [PATCH v19 3/7] xbitmap: add more operations

2017-12-13 Thread Tetsuo Handa
Wei Wang wrote:
> On 12/12/2017 09:20 PM, Tetsuo Handa wrote:
> > Wei Wang wrote:
> >> +void xb_clear_bit_range(struct xb *xb, unsigned long start, unsigned long 
> >> end)
> >> +{
> >> +  struct radix_tree_root *root = &xb->xbrt;
> >> +  struct radix_tree_node *node;
> >> +  void **slot;
> >> +  struct ida_bitmap *bitmap;
> >> +  unsigned int nbits;
> >> +
> >> +  for (; start < end; start = (start | (IDA_BITMAP_BITS - 1)) + 1) {
> >> +  unsigned long index = start / IDA_BITMAP_BITS;
> >> +  unsigned long bit = start % IDA_BITMAP_BITS;
> >> +
> >> +  bitmap = __radix_tree_lookup(root, index, &node, &slot);
> >> +  if (radix_tree_exception(bitmap)) {
> >> +  unsigned long ebit = bit + 2;
> >> +  unsigned long tmp = (unsigned long)bitmap;
> >> +
> >> +  nbits = min(end - start + 1, BITS_PER_LONG - ebit);
> >> +
> >> +  if (ebit >= BITS_PER_LONG)
> > What happens if we hit this "continue;" when "index == ULONG_MAX / 
> > IDA_BITMAP_BITS" ?
> 
> Thanks. I also improved the test case for this. I plan to change the 
> implementation a little bit to avoid such overflow (it has passed the test 
> cases that I have; just posting it out for another set of eyes):
> 
> {
> ...
>  unsigned long idx = start / IDA_BITMAP_BITS;
>  unsigned long bit = start % IDA_BITMAP_BITS;
>  unsigned long idx_end = end / IDA_BITMAP_BITS;
>  unsigned long ret;
> 
>  for (idx = start / IDA_BITMAP_BITS; idx <= idx_end; idx++) {
>  unsigned long ida_start = idx * IDA_BITMAP_BITS;
> 
>  bitmap = __radix_tree_lookup(root, idx, &node, &slot);
>  if (radix_tree_exception(bitmap)) {
>  unsigned long tmp = (unsigned long)bitmap;
>  unsigned long ebit = bit + 2;
> 
>  if (ebit >= BITS_PER_LONG)
>  continue;

Will you please please do eliminate exception path?
I can't interpret what "ebit >= BITS_PER_LONG" means.
The reason you "continue;" is that all bits beyond are "0", isn't it?
Then, it would make sense to "continue;" when finding next "1" because
all bits beyond are "0". But how does it make sense to "continue;" when
finding next "0" despite all bits beyond are "0"?

>  if (set)
>  ret = find_next_bit(&tmp, 
> BITS_PER_LONG, ebit);
>  else
>  ret = find_next_zero_bit(&tmp, 
> BITS_PER_LONG,
>   ebit);
>  if (ret < BITS_PER_LONG)
>  return ret - 2 + ida_start;
>  } else if (bitmap) {
>  if (set)
>  ret = find_next_bit(bitmap->bitmap,
>  IDA_BITMAP_BITS, bit);
>  else
>  ret = find_next_zero_bit(bitmap->bitmap,
> IDA_BITMAP_BITS, bit);

"bit" may not be 0 for the first round and "bit" is always 0 afterwords.
But where is the guaranteed that "end" is a multiple of IDA_BITMAP_BITS ?
Please explain why it is correct to use IDA_BITMAP_BITS unconditionally
for the last round.

>  if (ret < IDA_BITMAP_BITS)
>  return ret + ida_start;
>  } else if (!bitmap && !set) {

At this point bitmap == NULL is guaranteed. Thus, "!bitmap && " is pointless.

>  return bit + IDA_BITMAP_BITS * idx;
>  }
>  bit = 0;
>  }
> 
>  return end;
> }
> 
> 



> >
> >> +/**
> >> + * xb_find_next_set_bit - find the next set bit in a range
> >> + * @xb: the xbitmap to search
> >> + * @start: the start of the range, inclusive
> >> + * @end: the end of the range, exclusive
> >> + *
> >> + * Returns: the index of the found bit, or @end + 1 if no such bit is 
> >> found.
> >> + */
> >> +unsigned long xb_find_next_set_bit(struct xb *xb, unsigned long start,
> >> + unsigned long end)
> >> +{
> >> +  return xb_find_next_bit(xb, start, end, 1);
> >> +}
> > Won't "exclusive" loose ability to handle ULONG_MAX ? Since this is a
> > library module, missing ability to handle ULONG_MAX sounds like an omission.
> > Shouldn't we pass (or return) whether "found or not" flag (e.g. strtoul() in
> > C library function)?
> >
> >bool xb_find_next_set_bit(struct xb *xb, unsigned long start, unsigned 
> > long end, unsigned long *result);
> >unsigned long xb_find_next_set_bit(struct xb *xb, unsigned long start, 
> > unsigned long end, bool *found);
> 
> Yes, ULONG_MAX needs to be tested by xb_test_bit(). Compared to checking 
> the return value, would it be the same to let the caller check for the 
> ULONG_MAX boundary?
> 

Why does the caller need to care about whether it is ULONG_MAX or 

Re: [Qemu-devel] [PATCH v19 3/7] xbitmap: add more operations

2017-12-13 Thread Wei Wang

On 12/12/2017 09:20 PM, Tetsuo Handa wrote:

Wei Wang wrote:

+void xb_clear_bit_range(struct xb *xb, unsigned long start, unsigned long end)
+{
+   struct radix_tree_root *root = &xb->xbrt;
+   struct radix_tree_node *node;
+   void **slot;
+   struct ida_bitmap *bitmap;
+   unsigned int nbits;
+
+   for (; start < end; start = (start | (IDA_BITMAP_BITS - 1)) + 1) {
+   unsigned long index = start / IDA_BITMAP_BITS;
+   unsigned long bit = start % IDA_BITMAP_BITS;
+
+   bitmap = __radix_tree_lookup(root, index, &node, &slot);
+   if (radix_tree_exception(bitmap)) {
+   unsigned long ebit = bit + 2;
+   unsigned long tmp = (unsigned long)bitmap;
+
+   nbits = min(end - start + 1, BITS_PER_LONG - ebit);
+
+   if (ebit >= BITS_PER_LONG)

What happens if we hit this "continue;" when "index == ULONG_MAX / 
IDA_BITMAP_BITS" ?


Thanks. I also improved the test case for this. I plan to change the 
implementation a little bit to avoid such overflow (it has passed the test 
cases that I have; just posting it out for another set of eyes):


{
...
unsigned long idx = start / IDA_BITMAP_BITS;
unsigned long bit = start % IDA_BITMAP_BITS;
unsigned long idx_end = end / IDA_BITMAP_BITS;
unsigned long ret;

for (idx = start / IDA_BITMAP_BITS; idx <= idx_end; idx++) {
        unsigned long ida_start = idx * IDA_BITMAP_BITS;

        bitmap = __radix_tree_lookup(root, idx, &node, &slot);
        if (radix_tree_exception(bitmap)) {
                unsigned long tmp = (unsigned long)bitmap;
                unsigned long ebit = bit + 2;

                if (ebit >= BITS_PER_LONG)
                        continue;
                if (set)
                        ret = find_next_bit(&tmp, BITS_PER_LONG, ebit);
                else
                        ret = find_next_zero_bit(&tmp, BITS_PER_LONG,
                                                 ebit);
                if (ret < BITS_PER_LONG)
                        return ret - 2 + ida_start;
        } else if (bitmap) {
                if (set)
                        ret = find_next_bit(bitmap->bitmap,
                                            IDA_BITMAP_BITS, bit);
                else
                        ret = find_next_zero_bit(bitmap->bitmap,
                                                 IDA_BITMAP_BITS, bit);
                if (ret < IDA_BITMAP_BITS)
                        return ret + ida_start;
        } else if (!bitmap && !set) {
                return bit + IDA_BITMAP_BITS * idx;
        }
        bit = 0;
}

return end;
}
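
A sketch of the kind of boundary test this implies, in the style of the
userspace radix-tree test harness (tools/testing/radix-tree); it assumes
the revised loop above treats the end of the range as inclusive and
returns it when nothing is found:

/* Sketch only; runs in the userspace test harness, not in the kernel. */
static void xbitmap_check_last_chunk(struct xb *xb)
{
        /* A bit in the last chunk of the index space must be findable. */
        assert(!xb_preload_and_set_bit(xb, ULONG_MAX, GFP_KERNEL));
        assert(xb_find_next_set_bit(xb, ULONG_MAX - 10, ULONG_MAX) ==
               ULONG_MAX);

        /* Clearing it must not wrap around or loop forever. */
        xb_clear_bit(xb, ULONG_MAX);
        assert(xb_empty(xb));
}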




Can you eliminate exception path and fold all xbitmap patches into one, and
post only one xbitmap patch without virtio-balloon changes? If exception path
is valuable, you can add exception path after minimum version is merged.
This series is too difficult for me to close corner cases.


That exception path is claimed to save memory, and I don't have a strong 
reason to remove that part.

Matthew, could we get your feedback on this?
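
(Background for readers following the thread: the claimed saving comes from
keeping small bit patterns directly in the radix-tree slot word instead of
allocating a 128-byte ida_bitmap. A toy illustration, not the kernel code;
the low two bits of the slot hold the exceptional-entry tag, which is why
the functions above offset every bit index by 2:)

#include <stdio.h>

#define RADIX_TREE_EXCEPTIONAL_ENTRY 2UL /* the low-bits tag value */

int main(void)
{
        unsigned long slot = RADIX_TREE_EXCEPTIONAL_ENTRY;
        unsigned long bit = 5;        /* logical bit within the chunk */
        unsigned long ebit = bit + 2; /* skip the two tag bits */

        slot |= 1UL << ebit;          /* set the bit with no allocation */
        printf("slot = %#lx, bit %lu is %s\n", slot, bit,
               (slot >> ebit) & 1 ? "set" : "clear");
        return 0;
}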






+/**
+ * xb_find_next_set_bit - find the next set bit in a range
+ * @xb: the xbitmap to search
+ * @start: the start of the range, inclusive
+ * @end: the end of the range, exclusive
+ *
+ * Returns: the index of the found bit, or @end + 1 if no such bit is found.
+ */
+unsigned long xb_find_next_set_bit(struct xb *xb, unsigned long start,
+  unsigned long end)
+{
+   return xb_find_next_bit(xb, start, end, 1);
+}

Won't "exclusive" lose the ability to handle ULONG_MAX ? Since this is a
library module, missing the ability to handle ULONG_MAX sounds like an omission.
Shouldn't we pass (or return) a "found or not" flag (e.g. like strtoul() in
the C library)?

   bool xb_find_next_set_bit(struct xb *xb, unsigned long start,
                             unsigned long end, unsigned long *result);
   unsigned long xb_find_next_set_bit(struct xb *xb, unsigned long start,
                                      unsigned long end, bool *found);


Yes, ULONG_MAX needs to be tested by xb_test_bit(). Compared to checking 
the return value, would it be the same to let the caller check for the 
ULONG_MAX boundary?


Best,
Wei





Re: [Qemu-devel] [PATCH v19 3/7] xbitmap: add more operations

2017-12-12 Thread Tetsuo Handa
Wei Wang wrote:
> +void xb_clear_bit_range(struct xb *xb, unsigned long start, unsigned long 
> end)
> +{
> + struct radix_tree_root *root = &xb->xbrt;
> + struct radix_tree_node *node;
> + void **slot;
> + struct ida_bitmap *bitmap;
> + unsigned int nbits;
> +
> + for (; start < end; start = (start | (IDA_BITMAP_BITS - 1)) + 1) {
> + unsigned long index = start / IDA_BITMAP_BITS;
> + unsigned long bit = start % IDA_BITMAP_BITS;
> +
> + bitmap = __radix_tree_lookup(root, index, &node, &slot);
> + if (radix_tree_exception(bitmap)) {
> + unsigned long ebit = bit + 2;
> + unsigned long tmp = (unsigned long)bitmap;
> +
> + nbits = min(end - start + 1, BITS_PER_LONG - ebit);
> +
> + if (ebit >= BITS_PER_LONG)

What happens if we hit this "continue;" when "index == ULONG_MAX / 
IDA_BITMAP_BITS" ?

Can you eliminate exception path and fold all xbitmap patches into one, and
post only one xbitmap patch without virtio-balloon changes? If exception path
is valuable, you can add exception path after minimum version is merged.
This series is too difficult for me to close corner cases.

> + continue;
> + bitmap_clear(&tmp, ebit, nbits);
> + if (tmp == RADIX_TREE_EXCEPTIONAL_ENTRY)
> + __radix_tree_delete(root, node, slot);
> + else
> + rcu_assign_pointer(*slot, (void *)tmp);
> + } else if (bitmap) {
> + nbits = min(end - start + 1, IDA_BITMAP_BITS - bit);
> +
> + if (nbits != IDA_BITMAP_BITS)
> + bitmap_clear(bitmap->bitmap, bit, nbits);
> +
> + if (nbits == IDA_BITMAP_BITS ||
> + bitmap_empty(bitmap->bitmap, IDA_BITMAP_BITS)) {
> + kfree(bitmap);
> + __radix_tree_delete(root, node, slot);
> + }
> + }
> +
> + /*
> +  * Already reached the last usable ida bitmap, so just return,
> +  * otherwise overflow will happen.
> +  */
> + if (index == ULONG_MAX / IDA_BITMAP_BITS)
> + break;
> + }
> +}



> +/**
> + * xb_find_next_set_bit - find the next set bit in a range
> + * @xb: the xbitmap to search
> + * @start: the start of the range, inclusive
> + * @end: the end of the range, exclusive
> + *
> + * Returns: the index of the found bit, or @end + 1 if no such bit is found.
> + */
> +unsigned long xb_find_next_set_bit(struct xb *xb, unsigned long start,
> +unsigned long end)
> +{
> + return xb_find_next_bit(xb, start, end, 1);
> +}

Won't "exclusive" lose the ability to handle ULONG_MAX ? Since this is a
library module, missing the ability to handle ULONG_MAX sounds like an omission.
Shouldn't we pass (or return) a "found or not" flag (e.g. like strtoul() in
the C library)?

  bool xb_find_next_set_bit(struct xb *xb, unsigned long start,
                            unsigned long end, unsigned long *result);
  unsigned long xb_find_next_set_bit(struct xb *xb, unsigned long start,
                                     unsigned long end, bool *found);
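
For illustration, the second suggested signature could wrap the existing
static helper; a sketch only, treating the end of the range as inclusive
and assuming xb_find_next_bit() searches [start, end) and returns end when
nothing is found, as in the code above:

unsigned long xb_find_next_set_bit(struct xb *xb, unsigned long start,
                                   unsigned long end, bool *found)
{
        unsigned long ret;

        if (start < end) {
                ret = xb_find_next_bit(xb, start, end, 1);
                if (ret < end) {
                        *found = true;
                        return ret;
                }
        }
        /* Cover the inclusive endpoint, so end == ULONG_MAX works too. */
        if (start <= end && xb_test_bit(xb, end)) {
                *found = true;
                return end;
        }
        *found = false;
        return end;
}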



[Qemu-devel] [PATCH v19 3/7] xbitmap: add more operations

2017-12-12 Thread Wei Wang
This patch adds support to find the next 1 or 0 bit in an xbitmap range and
clear a range of bits.

More possible optimizations to add in the future:
1) xb_set_bit_range: set a range of bits.
2) when searching a bit, if the bit is not found in the slot, move on to
the next slot directly.
3) add tags to help searching.

Signed-off-by: Wei Wang 
Cc: Matthew Wilcox 
Cc: Andrew Morton 
Cc: Michal Hocko 
Cc: Michael S. Tsirkin 
Cc: Tetsuo Handa 
Suggested-by: Matthew Wilcox 
---
 include/linux/xbitmap.h      |   8 +-
 lib/xbitmap.c                | 229 +++
 tools/include/linux/bitmap.h |  34 +++
 tools/include/linux/kernel.h |   2 +
 4 files changed, 272 insertions(+), 1 deletion(-)

diff --git a/include/linux/xbitmap.h b/include/linux/xbitmap.h
index b4d8375..eddf0d5e 100644
--- a/include/linux/xbitmap.h
+++ b/include/linux/xbitmap.h
@@ -33,8 +33,14 @@ static inline void xb_init(struct xb *xb)
 }
 
 int xb_set_bit(struct xb *xb, unsigned long bit);
+int xb_preload_and_set_bit(struct xb *xb, unsigned long bit, gfp_t gfp);
 bool xb_test_bit(const struct xb *xb, unsigned long bit);
-int xb_clear_bit(struct xb *xb, unsigned long bit);
+void xb_clear_bit(struct xb *xb, unsigned long bit);
+unsigned long xb_find_next_set_bit(struct xb *xb, unsigned long start,
+  unsigned long end);
+unsigned long xb_find_next_zero_bit(struct xb *xb, unsigned long start,
+   unsigned long end);
+void xb_clear_bit_range(struct xb *xb, unsigned long start, unsigned long end);
 
 static inline bool xb_empty(const struct xb *xb)
 {
diff --git a/lib/xbitmap.c b/lib/xbitmap.c
index 182aa29..10df879 100644
--- a/lib/xbitmap.c
+++ b/lib/xbitmap.c
@@ -3,6 +3,13 @@
 #include 
 #include 
 
+/*
+ * Developer notes: locks are required to guarantee there are no concurrent
+ * calls of xb_set_bit, xb_clear_bit, xb_clear_bit_range, xb_test_bit,
+ * xb_find_next_set_bit, or xb_find_next_clear_bit to operate on the same
+ * ida bitmap.
+ */
+
 /**
  *  xb_set_bit - set a bit in the xbitmap
  *  @xb: the xbitmap tree used to record the bit
@@ -70,6 +77,28 @@ int xb_set_bit(struct xb *xb, unsigned long bit)
 EXPORT_SYMBOL(xb_set_bit);
 
 /**
+ *  xb_preload_and_set_bit - preload the memory and set a bit in the xbitmap
+ *  @xb: the xbitmap tree used to record the bit
+ *  @bit: index of the bit to set
+ *
+ * A wrapper of the xb_preload() and xb_set_bit().
+ * Returns: 0 on success; -EAGAIN or -ENOMEM on error.
+ */
+int xb_preload_and_set_bit(struct xb *xb, unsigned long bit, gfp_t gfp)
+{
+   int ret = 0;
+
+   if (!xb_preload(gfp))
+   return -ENOMEM;
+
+   ret = xb_set_bit(xb, bit);
+   xb_preload_end();
+
+   return ret;
+}
+EXPORT_SYMBOL(xb_preload_and_set_bit);
+
+/**
  * xb_clear_bit - clear a bit in the xbitmap
  * @xb: the xbitmap tree used to record the bit
  * @bit: index of the bit to clear
@@ -115,6 +144,63 @@ void xb_clear_bit(struct xb *xb, unsigned long bit)
 EXPORT_SYMBOL(xb_clear_bit);
 
 /**
+ * xb_clear_bit_range - clear a range of bits in the xbitmap
+ * @start: the start of the bit range, inclusive
+ * @end: the end of the bit range, exclusive
+ *
+ * This function is used to clear a range of bits in the xbitmap. If all the
+ * bits of the bitmap are 0, the bitmap will be freed.
+ */
+void xb_clear_bit_range(struct xb *xb, unsigned long start, unsigned long end)
+{
+   struct radix_tree_root *root = &xb->xbrt;
+   struct radix_tree_node *node;
+   void **slot;
+   struct ida_bitmap *bitmap;
+   unsigned int nbits;
+
+   for (; start < end; start = (start | (IDA_BITMAP_BITS - 1)) + 1) {
+   unsigned long index = start / IDA_BITMAP_BITS;
+   unsigned long bit = start % IDA_BITMAP_BITS;
+
+   bitmap = __radix_tree_lookup(root, index, &node, &slot);
+   if (radix_tree_exception(bitmap)) {
+   unsigned long ebit = bit + 2;
+   unsigned long tmp = (unsigned long)bitmap;
+
+   nbits = min(end - start + 1, BITS_PER_LONG - ebit);
+
+   if (ebit >= BITS_PER_LONG)
+   continue;
+   bitmap_clear(&tmp, ebit, nbits);
+   if (tmp == RADIX_TREE_EXCEPTIONAL_ENTRY)
+   __radix_tree_delete(root, node, slot);
+   else
+   rcu_assign_pointer(*slot, (void *)tmp);
+   } else if (bitmap) {
+   nbits = min(end - start + 1, IDA_BITMAP_BITS - bit);
+
+   if (nbits != IDA_BITMAP_BITS)
+   bitmap_clear(bitmap->bitmap, bit, nbits);
+
+   if (nbits == IDA_BITMAP_BITS ||
+