Re: [PULL 01/19] util/hbitmap: strict hbitmap_reset

2019-10-15 Thread John Snow



On 10/15/19 4:44 AM, Kevin Wolf wrote:
> Am 14.10.2019 um 20:10 hat John Snow geschrieben:
>>
>>
>> On 10/11/19 7:18 PM, John Snow wrote:
>>>
>>>
>>> On 10/11/19 5:48 PM, Eric Blake wrote:
>>>> On 10/11/19 4:25 PM, John Snow wrote:
>>>>> From: Vladimir Sementsov-Ogievskiy 
>>>>>
>>>>> hbitmap_reset has an unobvious property: it rounds the requested region
>>>>> up. This can provoke bugs, as in the recently fixed write-blocking mode
>>>>> of mirror: the user calls reset on an unaligned region, not keeping in
>>>>> mind that the rounded-up region may cover unrelated dirty bytes, whose
>>>>> "dirtiness" information is then lost.
>>>>>
>>>>> Make hbitmap_reset strict: assert that the arguments are aligned,
>>>>> allowing only one exception, when @start + @count == hb->orig_size.
>>>>> This is needed to accommodate users of hbitmap_next_dirty_area, which
>>>>> cares about hb->orig_size.
>>>>>
>>>>> Signed-off-by: Vladimir Sementsov-Ogievskiy 
>>>>> Reviewed-by: Max Reitz 
>>>>> Message-Id: <20190806152611.280389-1-vsement...@virtuozzo.com>
>>>>> [Maintainer edit: Max's suggestions from on-list. --js]
>>>>> Signed-off-by: John Snow 
>>>>> ---
>>>>>   include/qemu/hbitmap.h | 5 +++++
>>>>>   tests/test-hbitmap.c   | 2 +-
>>>>>   util/hbitmap.c         | 4 ++++
>>>>>   3 files changed, 10 insertions(+), 1 deletion(-)
>>>>>
>>>>
>>>>> +++ b/util/hbitmap.c
>>>>> @@ -476,6 +476,10 @@ void hbitmap_reset(HBitmap *hb, uint64_t start, uint64_t count)
>>>>>      /* Compute range in the last layer.  */
>>>>>      uint64_t first;
>>>>>      uint64_t last = start + count - 1;
>>>>> +    uint64_t gran = 1ULL << hb->granularity;
>>>>> +
>>>>> +    assert(!(start & (gran - 1)));
>>>>> +    assert(!(count & (gran - 1)) || (start + count == hb->orig_size));
>>>>
>>>> I know I'm replying a bit late (since this is now a pull request), but
>>>> would it be worth using the dedicated macro:
>>>>
>>>> assert(QEMU_IS_ALIGNED(start, gran));
>>>> assert(QEMU_IS_ALIGNED(count, gran) || start + count == hb->orig_size);
>>>>
>>>> instead of open-coding it?  (I would also drop the extra () around the
>>>> right half of ||). If we want it, that would now be a followup patch.
>>
>> I've noticed that seasoned C programmers hate extra parentheses a lot.
>> I've noticed that I cannot remember operator precedence enough to ever
>> feel like this is actually an improvement.
>>
>> Something about a nice weighted tree of ((expr1) || (expr2)) feels
>> soothing to my weary eyes. So, if it's not terribly important, I'd
>> prefer to leave it as-is.
> 
> I don't mind the parentheses, but I do prefer QEMU_IS_ALIGNED() to the
> open-coded version. Would that be a viable compromise?
> 

Oh, I'm sorry! I did change that. I didn't mean to appear any more
stubborn than I actually am.

--js
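
As a footnote to the compromise: QEMU_IS_ALIGNED() comes from
include/qemu/osdep.h, and for a power-of-two granularity it is equivalent
to the open-coded mask test. A minimal standalone sketch of that
equivalence (the granularity and offset values below are invented for the
demo):

    #include <assert.h>
    #include <stdint.h>

    /* Sketch of the macro as defined in include/qemu/osdep.h. */
    #define QEMU_IS_ALIGNED(n, m) (((n) % (m)) == 0)

    int main(void)
    {
        uint64_t gran = 1ULL << 9;   /* hypothetical 512-byte granularity */
        uint64_t start = 4096;

        /* For a power of two m, n % m == 0 iff (n & (m - 1)) == 0, so the
         * macro is a drop-in replacement for the open-coded test. */
        assert(QEMU_IS_ALIGNED(start, gran) == !(start & (gran - 1)));
        return 0;
    }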



Re: [PULL 01/19] util/hbitmap: strict hbitmap_reset

2019-10-15 Thread Kevin Wolf
Am 14.10.2019 um 20:10 hat John Snow geschrieben:
> 
> 
> On 10/11/19 7:18 PM, John Snow wrote:
> > 
> > 
> > On 10/11/19 5:48 PM, Eric Blake wrote:
> >> On 10/11/19 4:25 PM, John Snow wrote:
> >>> From: Vladimir Sementsov-Ogievskiy 
> >>>
> >>> hbitmap_reset has an unobvious property: it rounds the requested region
> >>> up. This can provoke bugs, as in the recently fixed write-blocking mode
> >>> of mirror: the user calls reset on an unaligned region, not keeping in
> >>> mind that the rounded-up region may cover unrelated dirty bytes, whose
> >>> "dirtiness" information is then lost.
> >>>
> >>> Make hbitmap_reset strict: assert that the arguments are aligned,
> >>> allowing only one exception, when @start + @count == hb->orig_size.
> >>> This is needed to accommodate users of hbitmap_next_dirty_area, which
> >>> cares about hb->orig_size.
> >>>
> >>> Signed-off-by: Vladimir Sementsov-Ogievskiy 
> >>> Reviewed-by: Max Reitz 
> >>> Message-Id: <20190806152611.280389-1-vsement...@virtuozzo.com>
> >>> [Maintainer edit: Max's suggestions from on-list. --js]
> >>> Signed-off-by: John Snow 
> >>> ---
> >>>   include/qemu/hbitmap.h | 5 +++++
> >>>   tests/test-hbitmap.c   | 2 +-
> >>>   util/hbitmap.c         | 4 ++++
> >>>   3 files changed, 10 insertions(+), 1 deletion(-)
> >>>
> >>
> >>> +++ b/util/hbitmap.c
> >>> @@ -476,6 +476,10 @@ void hbitmap_reset(HBitmap *hb, uint64_t start, uint64_t count)
> >>>   /* Compute range in the last layer.  */
> >>>   uint64_t first;
> >>>   uint64_t last = start + count - 1;
> >>> +    uint64_t gran = 1ULL << hb->granularity;
> >>> +
> >>> +    assert(!(start & (gran - 1)));
> >>> +    assert(!(count & (gran - 1)) || (start + count == hb->orig_size));
> >>
> >> I know I'm replying a bit late (since this is now a pull request), but
> >> would it be worth using the dedicated macro:
> >>
> >> assert(QEMU_IS_ALIGNED(start, gran));
> >> assert(QEMU_IS_ALIGNED(count, gran) || start + count == hb->orig_size);
> >>
> >> instead of open-coding it?  (I would also drop the extra () around the
> >> right half of ||). If we want it, that would now be a followup patch.
> 
> I've noticed that seasoned C programmers hate extra parentheses a lot.
> I've noticed that I cannot remember operator precedence enough to ever
> feel like this is actually an improvement.
> 
> Something about a nice weighted tree of ((expr1) || (expr2)) feels
> soothing to my weary eyes. So, if it's not terribly important, I'd
> prefer to leave it as-is.

I don't mind the parentheses, but I do prefer QEMU_IS_ALIGNED() to the
open-coded version. Would that be a viable compromise?

Kevin



Re: [PULL 01/19] util/hbitmap: strict hbitmap_reset

2019-10-14 Thread John Snow



On 10/11/19 7:18 PM, John Snow wrote:
> 
> 
> On 10/11/19 5:48 PM, Eric Blake wrote:
>> On 10/11/19 4:25 PM, John Snow wrote:
>>> From: Vladimir Sementsov-Ogievskiy 
>>>
>>> hbitmap_reset has an unobvious property: it rounds the requested region
>>> up. This can provoke bugs, as in the recently fixed write-blocking mode
>>> of mirror: the user calls reset on an unaligned region, not keeping in
>>> mind that the rounded-up region may cover unrelated dirty bytes, whose
>>> "dirtiness" information is then lost.
>>>
>>> Make hbitmap_reset strict: assert that the arguments are aligned,
>>> allowing only one exception, when @start + @count == hb->orig_size.
>>> This is needed to accommodate users of hbitmap_next_dirty_area, which
>>> cares about hb->orig_size.
>>>
>>> Signed-off-by: Vladimir Sementsov-Ogievskiy 
>>> Reviewed-by: Max Reitz 
>>> Message-Id: <20190806152611.280389-1-vsement...@virtuozzo.com>
>>> [Maintainer edit: Max's suggestions from on-list. --js]
>>> Signed-off-by: John Snow 
>>> ---
>>>   include/qemu/hbitmap.h | 5 +++++
>>>   tests/test-hbitmap.c   | 2 +-
>>>   util/hbitmap.c         | 4 ++++
>>>   3 files changed, 10 insertions(+), 1 deletion(-)
>>>
>>
>>> +++ b/util/hbitmap.c
>>> @@ -476,6 +476,10 @@ void hbitmap_reset(HBitmap *hb, uint64_t start, uint64_t count)
>>>   /* Compute range in the last layer.  */
>>>   uint64_t first;
>>>   uint64_t last = start + count - 1;
>>> +    uint64_t gran = 1ULL << hb->granularity;
>>> +
>>> +    assert(!(start & (gran - 1)));
>>> +    assert(!(count & (gran - 1)) || (start + count == hb->orig_size));
>>
>> I know I'm replying a bit late (since this is now a pull request), but
>> would it be worth using the dedicated macro:
>>
>> assert(QEMU_IS_ALIGNED(start, gran));
>> assert(QEMU_IS_ALIGNED(count, gran) || start + count == hb->orig_size);
>>
>> instead of open-coding it?  (I would also drop the extra () around the
>> right half of ||). If we want it, that would now be a followup patch.

I've noticed that seasoned C programmers hate extra parentheses a lot.
I've noticed that I cannot remember operator precedence enough to ever
feel like this is actually an improvement.

Something about a nice weighted tree of ((expr1) || (expr2)) feels
soothing to my weary eyes. So, if it's not terribly important, I'd
prefer to leave it as-is.

(You may feel free to counter-educate me as desired.)
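
For the record, the precedence trap that motivates the extra parentheses
is real: in C, == binds tighter than &, and & binds tighter than ||. A
tiny standalone demo (all values invented):

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t count = 4, gran = 4;

        /* & binds looser than ==, so the unparenthesized form parses as
         * count & ((gran - 1) == 0), not (count & (gran - 1)) == 0. */
        int trap     = count & gran - 1 == 0;     /* 4 & (3 == 0) -> 0 */
        int intended = (count & (gran - 1)) == 0; /* (4 & 3) == 0 -> 1 */

        assert(trap == 0 && intended == 1);
        return 0;
    }

The asserts in the patch avoid this particular trap by negating with !
rather than comparing with ==, but the defensive instinct is
understandable.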

>>
> 
> If the PR doesn't make it for some reason, I can amend a cleanup patch
> for the next PR.
> 

by the way: GOOD NEWS! ...

--js



Re: [PULL 01/19] util/hbitmap: strict hbitmap_reset

2019-10-11 Thread John Snow



On 10/11/19 5:48 PM, Eric Blake wrote:
> On 10/11/19 4:25 PM, John Snow wrote:
>> From: Vladimir Sementsov-Ogievskiy 
>>
>> hbitmap_reset has an unobvious property: it rounds the requested region
>> up. This can provoke bugs, as in the recently fixed write-blocking mode
>> of mirror: the user calls reset on an unaligned region, not keeping in
>> mind that the rounded-up region may cover unrelated dirty bytes, whose
>> "dirtiness" information is then lost.
>>
>> Make hbitmap_reset strict: assert that the arguments are aligned,
>> allowing only one exception, when @start + @count == hb->orig_size.
>> This is needed to accommodate users of hbitmap_next_dirty_area, which
>> cares about hb->orig_size.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy 
>> Reviewed-by: Max Reitz 
>> Message-Id: <20190806152611.280389-1-vsement...@virtuozzo.com>
>> [Maintainer edit: Max's suggestions from on-list. --js]
>> Signed-off-by: John Snow 
>> ---
>>   include/qemu/hbitmap.h | 5 +++++
>>   tests/test-hbitmap.c   | 2 +-
>>   util/hbitmap.c         | 4 ++++
>>   3 files changed, 10 insertions(+), 1 deletion(-)
>>
> 
>> +++ b/util/hbitmap.c
>> @@ -476,6 +476,10 @@ void hbitmap_reset(HBitmap *hb, uint64_t start, uint64_t count)
>>   /* Compute range in the last layer.  */
>>   uint64_t first;
>>   uint64_t last = start + count - 1;
>> +    uint64_t gran = 1ULL << hb->granularity;
>> +
>> +    assert(!(start & (gran - 1)));
>> +    assert(!(count & (gran - 1)) || (start + count == hb->orig_size));
> 
> I know I'm replying a bit late (since this is now a pull request), but
> would it be worth using the dedicated macro:
> 
> assert(QEMU_IS_ALIGNED(start, gran));
> assert(QEMU_IS_ALIGNED(count, gran) || start + count == hb->orig_size);
> 
> instead of open-coding it?  (I would also drop the extra () around the
> right half of ||). If we want it, that would now be a followup patch.
> 

If the PR doesn't make it for some reason, I can amend a cleanup patch
for the next PR.

--js



Re: [PULL 01/19] util/hbitmap: strict hbitmap_reset

2019-10-11 Thread Eric Blake

On 10/11/19 4:25 PM, John Snow wrote:
> From: Vladimir Sementsov-Ogievskiy 
>
> hbitmap_reset has an unobvious property: it rounds the requested region
> up. This can provoke bugs, as in the recently fixed write-blocking mode
> of mirror: the user calls reset on an unaligned region, not keeping in
> mind that the rounded-up region may cover unrelated dirty bytes, whose
> "dirtiness" information is then lost.
>
> Make hbitmap_reset strict: assert that the arguments are aligned,
> allowing only one exception, when @start + @count == hb->orig_size.
> This is needed to accommodate users of hbitmap_next_dirty_area, which
> cares about hb->orig_size.
>
> Signed-off-by: Vladimir Sementsov-Ogievskiy 
> Reviewed-by: Max Reitz 
> Message-Id: <20190806152611.280389-1-vsement...@virtuozzo.com>
> [Maintainer edit: Max's suggestions from on-list. --js]
> Signed-off-by: John Snow 
> ---
>   include/qemu/hbitmap.h | 5 +++++
>   tests/test-hbitmap.c   | 2 +-
>   util/hbitmap.c         | 4 ++++
>   3 files changed, 10 insertions(+), 1 deletion(-)
>
> +++ b/util/hbitmap.c
> @@ -476,6 +476,10 @@ void hbitmap_reset(HBitmap *hb, uint64_t start, uint64_t count)
>      /* Compute range in the last layer.  */
>      uint64_t first;
>      uint64_t last = start + count - 1;
> +    uint64_t gran = 1ULL << hb->granularity;
> +
> +    assert(!(start & (gran - 1)));
> +    assert(!(count & (gran - 1)) || (start + count == hb->orig_size));


I know I'm replying a bit late (since this is now a pull request), but 
would it be worth using the dedicated macro:


assert(QEMU_IS_ALIGNED(start, gran));
assert(QEMU_IS_ALIGNED(count, gran) || start + count == hb->orig_size);

instead of open-coding it?  (I would also drop the extra () around the 
right half of ||). If we want it, that would now be a followup patch.


--
Eric Blake, Principal Software Engineer
Red Hat, Inc.   +1-919-301-3226
Virtualization:  qemu.org | libvirt.org



[PULL 01/19] util/hbitmap: strict hbitmap_reset

2019-10-11 Thread John Snow
From: Vladimir Sementsov-Ogievskiy 

hbitmap_reset has an unobvious property: it rounds the requested region
up. This can provoke bugs, as in the recently fixed write-blocking mode
of mirror: the user calls reset on an unaligned region, not keeping in
mind that the rounded-up region may cover unrelated dirty bytes, whose
"dirtiness" information is then lost.

Make hbitmap_reset strict: assert that the arguments are aligned,
allowing only one exception, when @start + @count == hb->orig_size.
This is needed to accommodate users of hbitmap_next_dirty_area, which
cares about hb->orig_size.

Signed-off-by: Vladimir Sementsov-Ogievskiy 
Reviewed-by: Max Reitz 
Message-Id: <20190806152611.280389-1-vsement...@virtuozzo.com>
[Maintainer edit: Max's suggestions from on-list. --js]
Signed-off-by: John Snow 
---
 include/qemu/hbitmap.h | 5 +++++
 tests/test-hbitmap.c   | 2 +-
 util/hbitmap.c         | 4 ++++
 3 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/include/qemu/hbitmap.h b/include/qemu/hbitmap.h
index 4afbe6292e3..1bf944ca3d1 100644
--- a/include/qemu/hbitmap.h
+++ b/include/qemu/hbitmap.h
@@ -132,6 +132,11 @@ void hbitmap_set(HBitmap *hb, uint64_t start, uint64_t count);
  * @count: Number of bits to reset.
  *
  * Reset a consecutive range of bits in an HBitmap.
+ * @start and @count must be aligned to the bitmap granularity. The only
+ * exception is resetting the tail of the bitmap: @count may be equal to
+ * hb->orig_size - @start, in which case @count may be unaligned. The sum
+ * @start + @count may exceed hb->orig_size, but only if @start < hb->orig_size
+ * and @start + @count = ALIGN_UP(hb->orig_size, granularity).
  */
 void hbitmap_reset(HBitmap *hb, uint64_t start, uint64_t count);
 
diff --git a/tests/test-hbitmap.c b/tests/test-hbitmap.c
index eed5d288cbc..e1f867085f4 100644
--- a/tests/test-hbitmap.c
+++ b/tests/test-hbitmap.c
@@ -423,7 +423,7 @@ static void test_hbitmap_granularity(TestHBitmapData *data,
     hbitmap_test_check(data, 0);
     hbitmap_test_set(data, 0, 3);
     g_assert_cmpint(hbitmap_count(data->hb), ==, 4);
-    hbitmap_test_reset(data, 0, 1);
+    hbitmap_test_reset(data, 0, 2);
     g_assert_cmpint(hbitmap_count(data->hb), ==, 2);
 }
 
diff --git a/util/hbitmap.c b/util/hbitmap.c
index fd44c897ab0..757d39e360a 100644
--- a/util/hbitmap.c
+++ b/util/hbitmap.c
@@ -476,6 +476,10 @@ void hbitmap_reset(HBitmap *hb, uint64_t start, uint64_t count)
     /* Compute range in the last layer.  */
     uint64_t first;
     uint64_t last = start + count - 1;
+    uint64_t gran = 1ULL << hb->granularity;
+
+    assert(!(start & (gran - 1)));
+    assert(!(count & (gran - 1)) || (start + count == hb->orig_size));
 
     trace_hbitmap_reset(hb, start, count,
                         start >> hb->granularity, last >> hb->granularity);
-- 
2.21.0
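
To summarize the contract this patch enforces, here is a standalone
sketch (not QEMU code) that mirrors the two new assertions; the bitmap
size and granularity below are invented for the demo:

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* Mirrors the preconditions added to hbitmap_reset(): @start must be
     * aligned, and @count must be aligned unless the region runs exactly
     * to the end of the bitmap. */
    static bool reset_args_ok(uint64_t start, uint64_t count,
                              uint64_t orig_size, unsigned granularity)
    {
        uint64_t gran = 1ULL << granularity;

        return !(start & (gran - 1)) &&
               (!(count & (gran - 1)) || start + count == orig_size);
    }

    int main(void)
    {
        /* Hypothetical bitmap: 1003 bytes tracked at 8-byte granularity. */
        const uint64_t size = 1003;
        const unsigned g = 3;

        assert(reset_args_ok(0, 8, size, g));    /* aligned region: fine */
        assert(reset_args_ok(1000, 3, size, g)); /* unaligned tail: the one
                                                    allowed exception */
        assert(!reset_args_ok(0, 3, size, g));   /* unaligned mid-bitmap
                                                    region: now trips */
        return 0;
    }

This is also why the granularity test changed: with a granularity of 1
(two bits per granule), hbitmap_test_reset(data, 0, 1) is no longer a
legal mid-bitmap call, so the test now resets an aligned two-bit region.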