Hi Tanmay,
I just now saw that you added some more comments later in your
reply. Please find some comments below...
On 12/10/25 19:28, Tanmay Shah wrote:
<snip>
@@ -1154,6 +1164,7 @@ static void zynqmp_r5_cluster_exit(void *data)
for (i = 0; i < cluster->core_count; i++) {
r5_core = cluster->r5_cores[i];
+ cancel_work_sync(&r5_core->ipi->mbox_work);
I see a merge conflict on top of the for-next branch. Please rebase the
patch on top of the for-next branch:
https://git.kernel.org/pub/scm/linux/kernel/git/remoteproc/linux.git/log/?h=for-next
Will do in v4.
zynqmp_r5_free_mbox(r5_core->ipi);
of_reserved_mem_device_release(r5_core->dev);
put_device(r5_core->dev);
@@ -1162,6 +1173,7 @@ static void zynqmp_r5_cluster_exit(void *data)
}
kfree(cluster->r5_cores);
+ destroy_workqueue(cluster->workqueue);
kfree(cluster);
platform_set_drvdata(pdev, NULL);
}
@@ -1194,11 +1206,20 @@ static int zynqmp_r5_remoteproc_probe(struct platform_device *pdev)
return ret;
}
+ cluster->workqueue = alloc_workqueue(dev_name(dev),
+ WQ_UNBOUND | WQ_HIGHPRI, 0);
+ if (!cluster->workqueue) {
+ dev_err_probe(dev, -ENOMEM, "cannot create workqueue\n");
+ kfree(cluster);
+ return -ENOMEM;
+ }
+
The workqueue will be unused if the mbox properties are not mentioned in
the device tree. So, we need to allocate the workqueue only if IPI is
set up for at least one core. I think the following logic should work:
decide whether the workqueue is needed based on whether
zynqmp_r5_setup_mbox() succeeds for at least one core. If it succeeds,
set a flag to allocate the workqueue, and then later, right before
calling zynqmp_r5_core_init(), allocate the workqueue for the cluster.
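For v4, the suggested probe flow could look roughly like this (a sketch
only; `need_workqueue` is a hypothetical flag name, error unwinding is
trimmed, and it assumes zynqmp_r5_setup_mbox() returns a valid pointer
only when the device tree provides the mbox properties):

```c
	bool need_workqueue = false;

	for (i = 0; i < cluster->core_count; i++) {
		r5_core = cluster->r5_cores[i];
		/* mbox properties are optional in the device tree */
		r5_core->ipi = zynqmp_r5_setup_mbox(r5_core->dev);
		if (r5_core->ipi)
			need_workqueue = true;
	}

	/*
	 * Allocate the workqueue right before zynqmp_r5_core_init(),
	 * and only when at least one core actually has IPI set up.
	 */
	if (need_workqueue) {
		cluster->workqueue = alloc_workqueue(dev_name(dev),
						     WQ_UNBOUND | WQ_HIGHPRI, 0);
		if (!cluster->workqueue)
			return dev_err_probe(dev, -ENOMEM,
					     "cannot create workqueue\n");
	}
```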
Good idea. I'll gladly enhance this in the next patch version, if you
(still) agree that this patch is a valid solution for our use case,
after you have reviewed my reply regarding the R5 side.
Remoteproc can be used only to load() and start()/stop() the firmware;
RPMsg can be optional.
Also, before calling destroy_workqueue(), make sure to add a NULL check
and destroy the workqueue only if it was allocated.
Ack.
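In the exit path this would then become something like the following
sketch, matching the cluster-exit hunk quoted above:

```c
	/*
	 * destroy_workqueue() must not be called with a NULL pointer,
	 * so only tear the workqueue down if probe allocated one.
	 */
	if (cluster->workqueue)
		destroy_workqueue(cluster->workqueue);
```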
Thanks,
Stefan