On 12/30/24 15:12, Alexander Atanasov wrote:
On 24.12.24 12:13, Pavel Tikhomirov wrote:
I'll reply here to all the comments.
- kmap_atomic creates a temporary mapping on the current CPU only and we
want to create permanent mappings for the pages.
- kmap_atomic is deprecated - we want to avoid it.
agreed
- we want to use READ_ONCE/WRITE_ONCE everywhere since we are trying to avoid
taking locks. Maybe it is a good idea to move that to a separate patch.
So it looks like d_bat_entries, bat_entries and bat_levels can change
concurrently, thus we use *_ONCE to keep them stable within one scope. OK.
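To illustrate the pattern being discussed, a minimal sketch (the function
names and the u32 element type are illustrative, not the actual ploop code):
READ_ONCE() performs a single, untorn load whose value then stays stable for
the rest of the scope, and WRITE_ONCE() keeps the compiler from tearing or
duplicating the concurrent store.

/*
 * Illustrative lockless reader/writer pair on a shared BAT-like array:
 * the reader loads the entry exactly once and reuses the local copy,
 * the writer publishes the new value in one store.
 */
static bool entry_is_mapped(u32 *entries, unsigned int clu)
{
	u32 e = READ_ONCE(entries[clu]);	/* single load, reused below */

	return e != BAT_ENTRY_NONE;
}

static void entry_set(u32 *entries, unsigned int clu, u32 val)
{
	WRITE_ONCE(entries[clu], val);		/* untorn, single store */
}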
- these places are not in atomic context, but they create a non-preemptible
one as a side effect of kmap_atomic. This code is used when loading deltas,
i.e. it is not performance critical.
The function ploop_prepare_bat_update is always called under
spin_lock_irqsave(&ploop->bat_lock), which is atomic context where one cannot
sleep. So calling kmap there looks invalid.
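To make the alternative concrete, a minimal sketch of mapping the page once in
a sleepable allocation path instead (only the page/kmpage fields come from the
patch; the allocation helper and its error handling are assumed): code that
later runs under bat_lock then only dereferences the cached address and never
needs kmap()/kmap_atomic() itself.

/* Sketch only -- the real allocation path in dm-ploop-bat.c may differ. */
static struct md_page *ploop_alloc_md_page_sketch(gfp_t gfp)
{
	struct md_page *md = kzalloc(sizeof(*md), gfp);

	if (!md)
		return NULL;

	md->page = alloc_page(gfp);
	if (!md->page) {
		kfree(md);
		return NULL;
	}

	md->kmpage = kmap(md->page);	/* permanent mapping, taken outside bat_lock */
	return md;
}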
On 12/6/24 05:55, Alexander Atanasov wrote:
md_page is always present in memory. In that case md_page->page can always be
kept mapped, and we would not need to perform kmap_atomic/kunmap_atomic during
each lookup.
https://virtuozzo.atlassian.net/browse/VSTOR-91659
Suggested-by: Denis V. Lunev <d...@openvz.org>
Signed-off-by: Alexander Atanasov <alexander.atana...@virtuozzo.com>
---
drivers/md/dm-ploop-bat.c | 21 ++++++++-------------
drivers/md/dm-ploop-cmd.c | 33 ++++++++++++---------------------
drivers/md/dm-ploop-map.c | 35 +++++++++++++----------------------
drivers/md/dm-ploop.h | 14 +++++++-------
4 files changed, 40 insertions(+), 63 deletions(-)
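For readers without the full diff, the dm-ploop.h side of the change (as
implied by the hunks quoted below) is essentially caching the kernel mapping
next to the page; a sketch with the surrounding fields elided:

struct md_page {
	...
	struct page	*page;
	void		*kmpage;	/* cached kmap() of @page, valid for the page's lifetime */
	...
};

The piwb structure gains an analogous kmpage for its bat_page, per the
ploop_prepare_bat_update hunk.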
...
@@ -265,9 +263,9 @@ static int ploop_write_zero_cluster_sync(struct ploop *ploop,
int i;
for (i = 0; i < pio->bi_vcnt; i++) {
- data = kmap_atomic(pio->bi_io_vec[i].bv_page);
+ data = kmap(pio->bi_io_vec[i].bv_page);
This hunk seems unrelated to the original md->kmpage idea and just does a
kmap_atomic -> kmap replacement. Should it go into a separate patch?
Should we prove in the commit message that it is safe to call kmap here
(non-atomic context)?
memset(data, 0, PAGE_SIZE);
- kunmap_atomic(data);
+ kunmap(data);
}
return ploop_write_cluster_sync(ploop, pio, clu);
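If proving a non-atomic context for this path turns out to be awkward, one
possible alternative (just for comparison, not part of the patch) is
kmap_local_page()/kunmap_local(), the documented replacement for the
deprecated kmap_atomic(): it provides a short-lived, thread-local mapping and
does not require a sleepable context. The same loop would look roughly like:

	for (i = 0; i < pio->bi_vcnt; i++) {
		/* short-lived local mapping; usable in atomic context, unlike kmap() */
		data = kmap_local_page(pio->bi_io_vec[i].bv_page);
		memset(data, 0, PAGE_SIZE);
		kunmap_local(data);
	}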
...
@@ -909,9 +908,10 @@ static int ploop_prepare_bat_update(struct ploop *ploop, struct md_page *md,
piwb->pio = pio = ploop_alloc_pio(ploop, GFP_NOIO);
if (!page || !pio)
goto err;
+ piwb->kmpage = kmap(piwb->bat_page);
Same question: why is it safe to switch from kmap_atomic to kmap here?
ploop_init_pio(ploop, REQ_OP_WRITE, pio);
- bat_entries = kmap_atomic(md->page);
+ bat_entries = md->kmpage;
write_lock_irq(&ploop->bat_rwlock);
md->piwb = piwb;
...
@@ -759,14 +754,12 @@ static void notify_delta_merged(struct ploop *ploop, u8 level,
* 1)next delta (which became renumbered) or
* 2)prev delta (if !@forward).
*/
- bat_entries[i] = d_bat_entries[i];
+ WRITE_ONCE(bat_entries[i], READ_ONCE(d_bat_entries[i]));
if (!forward)
md->bat_levels[i] = level - 1;
else
md->bat_levels[i] = level;
}
- kunmap_atomic(bat_entries);
- kunmap_atomic(d_bat_entries);
if (stop)
break;
d_md = ploop_md_next_entry(d_md);
...
@@ -463,11 +464,10 @@ static inline bool ploop_md_page_cluster_is_in_top_delta(struct ploop *ploop,
return false;
}
- bat_entries = kmap_atomic(md->page);
- if (bat_entries[clu] == BAT_ENTRY_NONE ||
- md->bat_levels[clu] < ploop_top_level(ploop))
+ bat_entries = md->kmpage;
+ if (READ_ONCE(bat_entries[clu]) == BAT_ENTRY_NONE ||
+ READ_ONCE(md->bat_levels[clu]) < ploop_top_level(ploop))
Should we explain in the commit message why we need to use
READ_ONCE/WRITE_ONCE above?
ret = false;
- kunmap_atomic(bat_entries);
return ret;
}
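To tie this back to the commit message: with md->kmpage the per-lookup
kmap_atomic/kunmap_atomic pair disappears, and the *_ONCE accesses document
that the entry and level may change concurrently while being read without a
lock. A simplified sketch of the resulting lookup (element type assumed;
locking and the earlier checks elided):

static bool clu_is_in_top_delta_sketch(struct ploop *ploop,
				       struct md_page *md, u32 clu)
{
	u32 *bat_entries = md->kmpage;	/* no kmap_atomic per lookup */

	if (READ_ONCE(bat_entries[clu]) == BAT_ENTRY_NONE ||
	    READ_ONCE(md->bat_levels[clu]) < ploop_top_level(ploop))
		return false;

	return true;
}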
--
Best regards, Tikhomirov Pavel
Senior Software Developer, Virtuozzo.