Hi!

Thanks for the review!

On 9/12/23 21:29, Vladimir Sementsov-Ogievskiy wrote:
> On 04.09.23 11:31, Andrey Zhadchenko wrote:
>> Unlike other transaction commands, bitmap operations do not drain the
>> target bds. If we have an IOThread, this may result in inconsistencies,
>> as the bitmap content may change during the transaction command.
>> Add bdrv_drained_begin()/end() to the bitmap operations.
>>
>> Signed-off-by: Andrey Zhadchenko <andrey.zhadche...@virtuozzo.com>
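
For context, the change boils down to taking a drained section around each bitmap action, roughly like the sketch below (a sketch only: function and parameter names are simplified stand-ins, not the exact diff; the real code lives in the bitmap transaction actions in blockdev.c):

#include "block/block.h"

/* The bitmap is manipulated while the node is drained, so an IOThread
 * cannot modify it concurrently with the transaction command. */
static void bitmap_action_prepare(BlockDriverState *bs, Error **errp)
{
    bdrv_drained_begin(bs);   /* quiesce in-flight guest I/O on bs */

    /* ... add/enable/disable/merge the dirty bitmap here ... */
}

static void bitmap_action_clean(BlockDriverState *bs)
{
    bdrv_drained_end(bs);     /* resume guest I/O */
}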

> Hi!
>
> First, please always include a cover letter when sending more than one patch.
>
> Next. Hmm. Good idea, but I'm afraid that's still not enough.
>
> Assume you have two BSs, A and B, in two different iothreads. The sequence may then be like this:
>
> 1. drain_begin A
> 2. do operation with bitmap in A
> 3. guest writes to B; B is modified, and the bitmap in B is modified as well
> 4. drain_begin B
> 5. do operation with bitmap in B
> 6. drain_end B
> 7. drain_end A
>
> The user may expect that all the operations are done atomically with respect to any guest I/O. And if the operations are dependent, the intermediate write [3] may break the result.
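
To make the race concrete, the sequence above interleaves like this (bitmap_action() is a hypothetical stand-in for any of the bitmap transaction operations):

bdrv_drained_begin(bs_a);   /* 1: quiesces A only */
bitmap_action(bs_a);        /* 2: A's bitmap is stable here */
                            /* 3: guest write to B lands here; B's bitmap changes */
bdrv_drained_begin(bs_b);   /* 4 */
bitmap_action(bs_b);        /* 5: already sees the write from [3] */
bdrv_drained_end(bs_b);     /* 6 */
bdrv_drained_end(bs_a);     /* 7 */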

I see your point, but can the difference really be observed in this case? It is probably only relevant if the user can copy or merge bitmaps between block nodes (as far as I know, this is not currently possible).


> So we should drain all participating drives during the whole transaction. The simplest solution is a bdrv_drain_all_begin() / bdrv_drain_all_end() pair in qmp_transaction(). Could we start with that?
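
A minimal sketch of that variant, assuming it goes directly into qmp_transaction() in blockdev.c (signature abridged):

void qmp_transaction(TransactionActionList *actions, Error **errp)
{
    bdrv_drain_all_begin();   /* quiesce guest I/O on every node */

    /* ... run prepare/commit (or abort) and clean for each action ... */

    bdrv_drain_all_end();     /* resume guest I/O everywhere */
}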


That would definitely get rid of all the problems, but I do not really like the idea of draining unrelated nodes.

What do you think if I add a new function prototype to TransactionActionDrv that returns the bds a transaction action touches? Then we can collect a list of all the involved bds and drain them before prepare, undraining them after clean. Something along the lines of the sketch below.
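
(get_bds() is a hypothetical name, not existing QEMU API; the current TransactionActionDrv only has the commit/abort/clean callbacks:)

typedef struct TransactionActionDrv {
    void (*abort)(void *opaque);
    void (*commit)(void *opaque);
    void (*clean)(void *opaque);
    /* new: append every BlockDriverState this action touches to @bds */
    void (*get_bds)(void *opaque, GSList **bds);
} TransactionActionDrv;

qmp_transaction() would then call get_bds() for every action first, bdrv_drained_begin() each collected node, run the actions, and bdrv_drained_end() the nodes after clean. That way only the nodes that actually participate in the transaction are drained.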
