On 27/04/2024 at 18:45, Erick Archer wrote:
This is an effort to get rid of all multiplications from allocation
functions in order to prevent integer overflows [1].
Here the multiplication is obviously safe. However, using kcalloc*()
is more appropriate [2] and improves readability. This patch has no
effect on runtime behavior.
Link: https://github.com/KSPP/linux/issues/162 [1]
Link: https://www.kernel.org/doc/html/next/process/deprecated.html#open-coded-arithmetic-in-allocator-arguments [2]
Reviewed-by: Gustavo A. R. Silva <[email protected]>
Signed-off-by: Erick Archer <[email protected]>
---
Changes in v3:
- Update the commit message to better explain the changes.
- Rebase against linux-next.
Changes in v2:
- Add the "Reviewed-by:" tag.
- Rebase against linux-next.
Previous versions:
v1 ->
https://lore.kernel.org/linux-hardening/[email protected]
v2 ->
https://lore.kernel.org/linux-hardening/as8pr02mb7237a07d73d6d15ebf72fd8d8b...@as8pr02mb7237.eurprd02.prod.outlook.com
Hi,
This is a new try. In the v2 thread, Ingo explained that he considers
this change nonsense, since kzalloc() is a perfectly usable interface
and there is no real overflow here.
Anyway, if we have the 2-factor form of the allocator, I think it is
a good practice to use it.
In this version I have updated the commit message to explain that
the code is obviously safe, in contrast with the previous version,
which gave the impression that there was a real overflow bug.
I hope this patch can be applied this time.
Regards,
Erick
---
arch/x86/events/amd/uncore.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/x86/events/amd/uncore.c b/arch/x86/events/amd/uncore.c
index 4ccb8fa483e6..61c0a2114183 100644
--- a/arch/x86/events/amd/uncore.c
+++ b/arch/x86/events/amd/uncore.c
@@ -479,8 +479,8 @@ static int amd_uncore_ctx_init(struct amd_uncore *uncore, unsigned int cpu)
goto fail;
curr->cpu = cpu;
- curr->events = kzalloc_node(sizeof(*curr->events) *
- pmu->num_counters,
+ curr->events = kcalloc_node(pmu->num_counters,
+ sizeof(*curr->events),
GFP_KERNEL, node);
Hi,
not related to your patch, but amd_uncore_ctx looks like a good
candidate for a flexible array.
This would simplify allocation/freeing of the memory.
struct amd_uncore_ctx {
int refcnt;
int cpu;
struct hlist_node node;
struct perf_event *events[];
};
Just my 2c,
CJ
if (!curr->events) {
kfree(curr);
@@ -928,7 +928,7 @@ int amd_uncore_umc_ctx_init(struct amd_uncore *uncore, unsigned int cpu)
uncore->num_pmus += group_num_pmus[gid];
}
- uncore->pmus = kzalloc(sizeof(*uncore->pmus) * uncore->num_pmus,
+ uncore->pmus = kcalloc(uncore->num_pmus, sizeof(*uncore->pmus),
GFP_KERNEL);
if (!uncore->pmus) {
uncore->num_pmus = 0;