Probably s/reqs_avail_batch/reqs_avail/ for better readability: in this context batches are only the unit in which counts migrate between the per-cpu counters and the global counter, and you don't move anything between counters here, you just sum them up.
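
For context, here is a minimal userspace model of that split-counter
scheme (a sketch of what get_reqs_available()/put_reqs_available() do
in fs/aio.c, not the kernel code itself; NR_CPUS, NR_EVENTS and the
helper names are made up for illustration):

#include <stdio.h>

#define NR_CPUS    4
#define NR_EVENTS  128
#define REQ_BATCH  ((NR_EVENTS - 1) / (NR_CPUS * 4))

static unsigned global_avail = NR_EVENTS - 1; /* models atomic ctx->reqs_available */
static unsigned cpu_avail[NR_CPUS];           /* models percpu ctx->cpu->reqs_available */

/* Take one slot on @cpu, refilling the per-cpu cache from the global
 * pool a whole batch at a time (cf. get_reqs_available()). */
static int get_req(int cpu)
{
	if (!cpu_avail[cpu]) {
		if (global_avail < REQ_BATCH)
			return 0; /* the kernel returns -EAGAIN here */
		global_avail -= REQ_BATCH;
		cpu_avail[cpu] += REQ_BATCH;
	}
	cpu_avail[cpu]--;
	return 1;
}

/* Return one slot to @cpu, flushing the excess back to the global
 * pool, again a whole batch at a time (cf. put_reqs_available()). */
static void put_req(int cpu)
{
	cpu_avail[cpu]++;
	while (cpu_avail[cpu] >= REQ_BATCH * 2) {
		cpu_avail[cpu] -= REQ_BATCH;
		global_avail += REQ_BATCH;
	}
}

/* The accounting the patch fixes: a request is active iff its slot is
 * in neither the global pool nor any per-cpu cache. */
static unsigned reqs_active(void)
{
	unsigned avail = global_avail;
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++)
		avail += cpu_avail[cpu];
	return (NR_EVENTS - 1) - avail;
}

int main(void)
{
	get_req(0);                             /* one request in flight on CPU 0 */
	printf("active = %u\n", reqs_active()); /* prints 1 */
	put_req(0);
	printf("active = %u\n", reqs_active()); /* prints 0 */
	return 0;
}

Right after get_req(0) the global counter has already dropped by a whole
REQ_BATCH, so counting active requests from the global counter alone
overestimates them by REQ_BATCH - 1; summing in the per-cpu parts, as
the patch below does, gives the exact number.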

On 4/23/21 1:57 PM, Alexander Mikhalitsyn wrote:
We have to take into account the percpu part of the
reqs_available counter in struct kioctx.

Fixes: f5d1279 ("ve/aio: Add a handle to checkpoint/restore AIO context")

https://jira.sw.ru/browse/PSBM-128710


Reviewed-by: Pavel Tikhomirov <ptikhomi...@virtuozzo.com>

Signed-off-by: Alexander Mikhalitsyn <alexander.mikhalit...@virtuozzo.com>
---
  fs/aio.c | 12 +++++++++++-
  1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/fs/aio.c b/fs/aio.c
index 7c547247b056..4ae0cac0f3ff 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -1898,9 +1898,19 @@ static bool has_reqs_active(struct kioctx *ctx)
  {
        unsigned long flags;
        unsigned nr;
+       int cpu;
+       unsigned reqs_avail_batch = 0;

        spin_lock_irqsave(&ctx->completion_lock, flags);
-       nr = (ctx->nr_events - 1) - atomic_read(&ctx->reqs_available);
+       /*
+        * See get_reqs_available()/put_reqs_available() for how
+        * reqs_available is distributed between the atomic
+        * ctx->reqs_available and the percpu ctx->cpu->reqs_available.
+        */
+       for_each_possible_cpu(cpu)
+               reqs_avail_batch += per_cpu_ptr(ctx->cpu, cpu)->reqs_available;
+       nr = ctx->nr_events - 1;
+       nr -= atomic_read(&ctx->reqs_available) + reqs_avail_batch;
        nr -= ctx->completed_events;
        spin_unlock_irqrestore(&ctx->completion_lock, flags);
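
To put the numbers from the sketch above into the hunk: after one
submission on CPU 0 the model has global_avail = 120, cpu_avail =
{6, 0, 0, 0} and REQ_BATCH = 7 (again, illustrative values, not taken
from a run of the patched kernel):

	/* old code: per-cpu parts ignored, the 6 cached slots count as active */
	nr = (NR_EVENTS - 1) - 120;       /* = 7, but only 1 request is in flight */

	/* new code: per-cpu parts summed in first */
	nr = (NR_EVENTS - 1) - (120 + 6); /* = 1, correct */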

--
Best regards, Tikhomirov Pavel
Software Developer, Virtuozzo.