On Mon, Mar 12, 2018 at 03:10:47PM +0100, Maciej S. Szmigiero wrote:
> And this current maximum was reached by CPU types added in
> families < 15h during last 10+ years (the oldest supported CPU family in

You're assuming that the rate of adding patches to the microcode
container won't change. You have a crystal ball which shows you the
future?

Ok, enough with the bullshit.

Here's what I'll take as hardening patches:

1. Check that the equivalence table length does not exceed the size
of the whole blob. This is the only sane limit check we can do - no
arbitrary bullshit about how many years it'll take to reach some limit.
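A minimal sketch of what such a bounds check could look like. The
function name, the header size constant, and the parameter names are
illustrative assumptions, not the kernel's actual code:

```c
#include <stddef.h>

/* assumed container header size (magic + section type + length fields) */
#define CONTAINER_HDR_SZ 12

/*
 * Reject an equivalence table whose declared length would run past
 * the end of the whole microcode blob. Checking the blob size first
 * avoids an unsigned underflow in the subtraction.
 */
static int equiv_table_fits(size_t equiv_tbl_len, size_t blob_size)
{
	if (blob_size < CONTAINER_HDR_SZ)
		return 0;

	if (equiv_tbl_len > blob_size - CONTAINER_HDR_SZ)
		return 0;

	return 1;
}
```

The point is that the only reference the check needs is the blob itself,
so it stays valid no matter how many patches future families add.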

2. Add a PATCH_MAX_SIZE macro which evaluates to the max of all
family patch sizes:

#define F1XH_MPB_MAX_SIZE 2048
#define F14H_MPB_MAX_SIZE 1824
#define F15H_MPB_MAX_SIZE 4096
#define F16H_MPB_MAX_SIZE 3458
#define F17H_MPB_MAX_SIZE 3200

so that future additions won't break the macro.
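One way the macro could be built, sketched as plain C so it is
self-contained here; kernel code would likely reach for the existing
max() helpers instead of the hypothetical MPB_MAX() below:

```c
/* per-family maximum patch sizes, as listed above */
#define F1XH_MPB_MAX_SIZE 2048
#define F14H_MPB_MAX_SIZE 1824
#define F15H_MPB_MAX_SIZE 4096
#define F16H_MPB_MAX_SIZE 3458
#define F17H_MPB_MAX_SIZE 3200

/* compile-time max of two constants */
#define MPB_MAX(a, b) ((a) > (b) ? (a) : (b))

/*
 * PATCH_MAX_SIZE is derived from the per-family sizes, so adding a
 * new FxxH_MPB_MAX_SIZE only requires extending this chain - the
 * macro can never silently fall behind a newly added family.
 */
#define PATCH_MAX_SIZE				\
	MPB_MAX(F1XH_MPB_MAX_SIZE,		\
	MPB_MAX(F14H_MPB_MAX_SIZE,		\
	MPB_MAX(F15H_MPB_MAX_SIZE,		\
	MPB_MAX(F16H_MPB_MAX_SIZE, F17H_MPB_MAX_SIZE))))
```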

3. Fix install_equiv_cpu_table() to return an unsigned int
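For illustration only, a stub showing the signature change: the
function hands back a byte count/offset into the blob, which can never
be negative, so unsigned int is the honest return type. The body below
is a placeholder, not the real parsing code:

```c
#include <stddef.h>

/*
 * Stub: returns the number of bytes the equivalence table occupied
 * in the blob. The actual table parsing is elided here.
 */
static unsigned int install_equiv_cpu_table(const void *buf, size_t size)
{
	(void)buf;

	/* placeholder: pretend the whole buffer was the table */
	return (unsigned int)size;
}
```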

Please make each of the points above a separate patch, with a proper
commit message explaining why it does what it does, and test them.

Thx.

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.
