>> The in-stack randomization is really a very small change both code wise and
>> logic wise.
>> It does not affect real workloads and does not require enablement of other
>> features (such as GCC plugins).
>> So, I think we should really reconsider its inclusion.
> I'd agree: the code is tiny and
On Mon, Jul 29, 2019 at 11:41:11AM +0000, Reshetova, Elena wrote:
> I want to summarize here the data (including the performance numbers)
> and reasoning for the in-stack randomization feature. I have organized
> it in a simple set of Q below.
Thanks for these!
> The in-stack randomization is
Ingo, Andy,
I want to summarize here the data (including the performance numbers)
and reasoning for the in-stack randomization feature. I have organized
it in a simple set of Q below.
Q: Why do we need in-stack per-syscall randomization when we already have
all known attack vectors covered with
On Wed, May 29, 2019 at 10:13:43AM +0000, Reshetova, Elena wrote:
> On related note: the current prng we have in kernel (prandom) is based on a
> *very old* style of prngs, which is basically 4 linear LFSRs xored together.
> Nowadays, we have much more powerful prngs that show much better
>
On Wed, May 29, 2019 at 10:13:43AM +0000, Reshetova, Elena wrote:
> Not sure about ideal params for the whole combination here since security
> and performance are basically conflicting with each other (as usual).
> So, that's why I was trying to propose to have two version of this:
> - one with
From: Reshetova, Elena
> Sent: 29 May 2019 11:14
> On related note: the current prng we have in kernel (prandom) is based on a
> *very old* style of prngs, which is basically 4 linear LFSRs xored together.
I'm no expert here (apart from some knowledge of LFRS/CRC) but
even adding the results
> I confess I've kind of lost the plot on the performance requirements
> at this point. Instead of measuring and evaluating potential
> solutions, can we try to approach this from the opposite direction and
> ask what the requirements are?
>
> What's the maximum number of CPU cycles that we are
I confess I've kind of lost the plot on the performance requirements
at this point. Instead of measuring and evaluating potential
solutions, can we try to approach this from the opposite direction and
ask what the requirements are?
What's the maximum number of CPU cycles that we are allowed to
> > With 5 bits there's a ~96.9% chance of crashing the system in an attempt,
> > the exploit cannot be used for a range of attacks, including spear
> > attacks and fast-spreading worms, right? A crashed and inaccessible
> > system also increases the odds of leaving around unfinished attack code
>
On Sun, May 12, 2019 at 10:02:45AM +0200, Ingo Molnar wrote:
> * Kees Cook wrote:
> > I still think just using something very simply like rdtsc would be good
> > enough.
> >
> > This isn't meant to be a perfect defense: it's meant to disrupt the
> > ability to trivially predict (usually another
* Kees Cook wrote:
> On Sat, May 11, 2019 at 03:45:19PM -0700, Andy Lutomirski wrote:
> > ISTM maybe a better first step would be to make get_random_bytes() be
> > much faster? :)
>
> I'm not opposed to that, but I want to make sure we don't break it for
> "real" crypto uses...
I'm quite
On Sat, May 11, 2019 at 03:45:19PM -0700, Andy Lutomirski wrote:
> ISTM maybe a better first step would be to make get_random_bytes() be
> much faster? :)
I'm not opposed to that, but I want to make sure we don't break it for
"real" crypto uses...
I still think just using something very simply
On Thu, May 9, 2019 at 1:43 AM Ingo Molnar wrote:
>
>
> * Reshetova, Elena wrote:
>
> > > I find it ridiculous that even with 4K blocked get_random_bytes(),
> > > which gives us 32k bits, which with 5 bits should amortize the RNG
> > > call to something like "once per 6553 calls", we still see
* Reshetova, Elena wrote:
> > I find it ridiculous that even with 4K blocked get_random_bytes(),
> > which gives us 32k bits, which with 5 bits should amortize the RNG
> > call to something like "once per 6553 calls", we still see 17%
> > overhead? It's either a measurement artifact, or
> > I find it ridiculous that even with 4K blocked get_random_bytes(), which
> > gives us 32k bits, which with 5 bits should amortize the RNG call to
> > something like "once per 6553 calls", we still see 17% overhead? It's
> > either a measurement artifact, or something doesn't compute.
>
> If
> * Reshetova, Elena wrote:
>
> > > * Reshetova, Elena wrote:
> > >
> > > > CONFIG_PAGE_TABLE_ISOLATION=n:
> > > >
> > > > base: Simple syscall: 0.0510 microseconds
> > > > get_random_bytes(4096 bytes buffer): Simple syscall: 0.0597 microseconds
> > > >
* Reshetova, Elena wrote:
> > * Reshetova, Elena wrote:
> >
> > > CONFIG_PAGE_TABLE_ISOLATION=n:
> > >
> > > base: Simple syscall: 0.0510 microseconds
> > > get_random_bytes(4096 bytes buffer): Simple syscall: 0.0597 microseconds
> > >
> > > So, pure speed
> * Reshetova, Elena wrote:
>
> > CONFIG_PAGE_TABLE_ISOLATION=n:
> >
> > base: Simple syscall: 0.0510 microseconds
> > get_random_bytes(4096 bytes buffer): Simple syscall: 0.0597 microseconds
> >
> > So, pure speed wise get_random_bytes() with 1 page per-cpu
* Reshetova, Elena wrote:
> CONFIG_PAGE_TABLE_ISOLATION=n:
>
> base: Simple syscall: 0.0510 microseconds
> get_random_bytes(4096 bytes buffer): Simple syscall: 0.0597 microseconds
>
> So, pure speed wise get_random_bytes() with 1 page per-cpu buffer wins.
..
> > rdrand (calling every 8 syscalls): Simple syscall: 0.0795 microseconds
>
> You could try something like:
> u64 rand_val = cpu_var->syscall_rand;
>
> while (unlikely(rand_val == 0))
> rand_val = rdrand64();
>
> stack_offset = rand_val & 0xff;
>
> * Andy Lutomirski wrote:
>
> > Or we decide that calling get_random_bytes() is okay with IRQs off and
> > this all gets a bit simpler.
>
> BTW., before we go down this path any further, is the plan to bind this
> feature to a real CPU-RNG capability, i.e. to the RDRAND instruction,
> which
> From: Reshetova, Elena
> > Sent: 03 May 2019 17:17
> ...
> > rdrand (calling every 8 syscalls): Simple syscall: 0.0795 microseconds
>
> You could try something like:
> u64 rand_val = cpu_var->syscall_rand;
>
> while (unlikely(rand_val == 0))
> rand_val = rdrand64();
>
>> On Fri, May 3, 2019 at 9:40 AM David Laight wrote:
> >
> > That gives you 10 system calls per rdrand instruction
> > and mostly takes the latency out of line.
>
> Do we really want to do this? What is the attack scenario?
>
> With no VLA's, and the stackleak plugin, what's the upside? Are we
On Fri, May 3, 2019 at 9:40 AM David Laight wrote:
>
> That gives you 10 system calls per rdrand instruction
> and mostly takes the latency out of line.
Do we really want to do this? What is the attack scenario?
With no VLA's, and the stackleak plugin, what's the upside? Are we
adding random
From: Reshetova, Elena
> Sent: 03 May 2019 17:17
...
> rdrand (calling every 8 syscalls): Simple syscall: 0.0795 microseconds
You could try something like:
u64 rand_val = cpu_var->syscall_rand;
while (unlikely(rand_val == 0))
rand_val = rdrand64();
> On May 2, 2019, at 9:43 AM, Ingo Molnar wrote:
>
>
> * Andy Lutomirski wrote:
>
>>> 8 gigabits/sec sounds good throughput in principle, if there's no
>>> scalability pathologies with that.
>>
>> The latency is horrible.
>
> Latency would be amortized via batching anyway, so 8
> * David Laight wrote:
>
> > It has already been measured - it is far too slow.
>
> I don't think proper buffering was tested, was it? Only a per syscall
> RDRAND overhead which I can imagine being not too good.
>
Well, I have some numbers, but I am struggling to understand one
aspect there.
* David Laight wrote:
> It has already been measured - it is far too slow.
I don't think proper buffering was tested, was it? Only a per syscall
RDRAND overhead which I can imagine being not too good.
> > Because calling tens of millions of system calls per second will
> > deplete any
* Andy Lutomirski wrote:
> > 8 gigabits/sec sounds good throughput in principle, if there's no
> > scalability pathologies with that.
>
> The latency is horrible.
Latency would be amortized via batching anyway, so 8 gigabits/sec
suggests something on the order of magnitude of 4 bits per
From: Ingo Molnar
> Sent: 02 May 2019 16:09
> * Andy Lutomirski wrote:
>
> > Or we decide that calling get_random_bytes() is okay with IRQs off and
> > this all gets a bit simpler.
>
> BTW., before we go down this path any further, is the plan to bind this
> feature to a real CPU-RNG
On Thu, May 2, 2019 at 8:09 AM Ingo Molnar wrote:
>
>
> * Andy Lutomirski wrote:
>
> > Or we decide that calling get_random_bytes() is okay with IRQs off and
> > this all gets a bit simpler.
>
> BTW., before we go down this path any further, is the plan to bind this
> feature to a real CPU-RNG
* Andy Lutomirski wrote:
> Or we decide that calling get_random_bytes() is okay with IRQs off and
> this all gets a bit simpler.
BTW., before we go down this path any further, is the plan to bind this
feature to a real CPU-RNG capability, i.e. to the RDRAND instruction,
which excludes a
On Thu, May 2, 2019 at 2:23 AM David Laight wrote:
>
> From: Reshetova, Elena
> > Sent: 02 May 2019 09:16
> ...
> > > I'm also guessing that get_cpu_var() disables pre-emption?
> >
> > Yes, in my understanding:
> >
> > #define get_cpu_var(var) \
> >
From: Reshetova, Elena
> Sent: 02 May 2019 09:16
...
> > I'm also guessing that get_cpu_var() disables pre-emption?
>
> Yes, in my understanding:
>
> #define get_cpu_var(var) \
> (*({ \
From: Reshetova, Elena
> > Sent: 30 April 2019 18:51
> ...
> > +unsigned char random_get_byte(void)
> > +{
> > +struct rnd_buffer *buffer = &get_cpu_var(stack_rand_offset);
> > +unsigned char res;
> > +
> > +if (buffer->byte_counter >= RANDOM_BUFFER_SIZE) {
> > +
> From: Reshetova, Elena
> > Sent: 30 April 2019 18:51
> ...
> > I guess this is true, so I did a quick implementation now to estimate the
> > performance hits.
> > Here are the preliminary numbers (proper ones will take a bit more time):
> >
> > base: Simple syscall: 0.1761 microseconds
> >
On Wed, May 1, 2019 at 1:42 AM David Laight wrote:
>
> From: Reshetova, Elena
> > Sent: 30 April 2019 18:51
> ...
> > +unsigned char random_get_byte(void)
> > +{
> > +struct rnd_buffer *buffer = &get_cpu_var(stack_rand_offset);
> > +unsigned char res;
> > +
> > +if (buffer->byte_counter
From: Reshetova, Elena
> Sent: 30 April 2019 18:51
...
> +unsigned char random_get_byte(void)
> +{
> +struct rnd_buffer *buffer = &get_cpu_var(stack_rand_offset);
> +unsigned char res;
> +
> +if (buffer->byte_counter >= RANDOM_BUFFER_SIZE) {
> +get_random_bytes(&(buffer->buffer),
From: Reshetova, Elena
> Sent: 30 April 2019 18:51
...
> I guess this is true, so I did a quick implementation now to estimate the
> performance hits.
> Here are the preliminary numbers (proper ones will take a bit more time):
>
> base: Simple syscall: 0.1761 microseconds
> get_random_bytes (4096
On Tue, Apr 30, 2019 at 10:51 AM Reshetova, Elena wrote:
> base: Simple syscall: 0.1761 microseconds
> get_random_bytes (4096 bytes per-cpu buffer): 0.1793 microseconds
> get_random_bytes (64 bytes per-cpu buffer): 0.1866 microseconds
The 4096 size seems pretty good.
> Below is a snip of what I
>
> > On Apr 29, 2019, at 12:46 AM, Reshetova, Elena wrote:
> >
> >
> On Apr 26, 2019, at 7:01 AM, Theodore Ts'o wrote:
> >>>
> >
> >> It seems to me
> >> that we should be using the “fast-erasure” construction for all
> get_random_bytes()
> >> invocations. Specifically, we should have a
> On Apr 29, 2019, at 12:46 AM, Reshetova, Elena wrote:
>
>
On Apr 26, 2019, at 7:01 AM, Theodore Ts'o wrote:
>>>
>
>> It seems to me
>> that we should be using the “fast-erasure” construction for all
>> get_random_bytes()
>> invocations. Specifically, we should have a per cpu
> On Fri, Apr 26, 2019 at 10:01:02AM -0400, Theodore Ts'o wrote:
> > On Fri, Apr 26, 2019 at 11:33:09AM +0000, Reshetova, Elena wrote:
> > > Adding Eric and Herbert to continue discussion for the chacha part.
> > > So, as a short summary I am trying to find out a fast (fast enough to be
> > >
> On Fri, Apr 26, 2019 at 11:33:09AM +0000, Reshetova, Elena wrote:
> > Adding Eric and Herbert to continue discussion for the chacha part.
> > So, as a short summary I am trying to find out a fast (fast enough to be
> > used per
> syscall
> > invocation) source of random bits with good enough
> > On Apr 26, 2019, at 7:01 AM, Theodore Ts'o wrote:
> >
> >> On Fri, Apr 26, 2019 at 11:33:09AM +0000, Reshetova, Elena wrote:
> >> Adding Eric and Herbert to continue discussion for the chacha part.
> >> So, as a short summary I am trying to find out a fast (fast enough to be
> >> used per
>
> On Apr 26, 2019, at 11:02 AM, Theodore Ts'o wrote:
>
>> On Fri, Apr 26, 2019 at 10:44:20AM -0700, Eric Biggers wrote:
>> Would it be possible to call ChaCha20 through the actual crypto API so that
>> SIMD instructions (e.g. AVX-2) could be used? That would make it *much* faster.
>>
> On Apr 26, 2019, at 7:01 AM, Theodore Ts'o wrote:
>
>> On Fri, Apr 26, 2019 at 11:33:09AM +0000, Reshetova, Elena wrote:
>> Adding Eric and Herbert to continue discussion for the chacha part.
>> So, as a short summary I am trying to find out a fast (fast enough to be
>> used per syscall
On Fri, Apr 26, 2019 at 10:44:20AM -0700, Eric Biggers wrote:
> Would it be possible to call ChaCha20 through the actual crypto API so that
> SIMD instructions (e.g. AVX-2) could be used? That would make it *much* faster.
> Also consider AES-CTR with AES-NI instructions.
It's not obvious SIMD
On Fri, Apr 26, 2019 at 10:01:02AM -0400, Theodore Ts'o wrote:
> On Fri, Apr 26, 2019 at 11:33:09AM +0000, Reshetova, Elena wrote:
> > Adding Eric and Herbert to continue discussion for the chacha part.
> > So, as a short summary I am trying to find out a fast (fast enough to be
> > used per
On Fri, 2019-04-26 at 12:33 +0100, Reshetova, Elena wrote:
> 1) rdtsc or variations based on it (David proposed some CRC-based variants for
> > example)
Hi,
Could we repeatedly measure the time for a short syscall on a quiet system and
estimate the entropy we get from this? In some scenarios the
On Fri, Apr 26, 2019 at 11:33:09AM +0000, Reshetova, Elena wrote:
> Adding Eric and Herbert to continue discussion for the chacha part.
> So, as a short summary I am trying to find out a fast (fast enough to be used
> per syscall
> invocation) source of random bits with good enough security
> Hi,
>
> Sorry for the delay - Easter holidays + I was trying to arrange my brain
> around
> proposed options.
> Here what I think our options are with regards to the source of randomness:
>
> 1) rdtsc or variations based on it (David proposed some CRC-based variants for
> example)
> 2)
> From: Reshetova, Elena
> > Sent: 24 April 2019 12:43
> >
> > Sorry for the delay - Easter holidays + I was trying to arrange my brain
> > around
> proposed options.
> > Here what I think our options are with regards to the source of randomness:
> >
> > 1) rdtsc or variations based on it (David
From: Reshetova, Elena
> Sent: 24 April 2019 12:43
>
> Sorry for the delay - Easter holidays + I was trying to arrange my brain
> around proposed options.
> Here what I think our options are with regards to the source of randomness:
>
> 1) rdtsc or variations based on it (David proposed some
Hi,
Sorry for the delay - Easter holidays + I was trying to arrange my brain around
proposed options.
Here is what I think our options are with regards to the source of randomness:
1) rdtsc or variations based on it (David proposed some CRC-based variants for
example)
2) prandom-based options
From: Theodore Ts'o
> Sent: 17 April 2019 16:16
> On Wed, Apr 17, 2019 at 09:28:35AM +0000, David Laight wrote:
> >
> > If you can guarantee back to back requests on the PRNG then it is probably
> > possible to recalculate its state from 'bits of state'/5 calls.
> > Depend on the PRNG this might
On Wed, Apr 17, 2019 at 10:17 AM Theodore Ts'o wrote:
>
> On Wed, Apr 17, 2019 at 09:28:35AM +0000, David Laight wrote:
> >
> > If you can guarantee back to back requests on the PRNG then it is probably
> > possible to recalculate its state from 'bits of state'/5 calls.
> > Depend on the PRNG
On Wed, Apr 17, 2019 at 09:28:35AM +0000, David Laight wrote:
>
> If you can guarantee back to back requests on the PRNG then it is probably
> possible to recalculate its state from 'bits of state'/5 calls.
> Depend on the PRNG this might be computationally expensive.
> For some PRNG it will be
From: Reshetova, Elena
> Sent: 16 April 2019 17:47
..
> > It seems though the assumption that we're assuming the attacker has
> > arbitrary ability to get the low bits of the stack, so *if* that's
> > true, then eventually, you'd be able to get enough samples that you
> > could reverse engineer
* Theodore Ts'o wrote:
> It seems though the assumption that we're assuming the attacker has
> arbitrary ability to get the low bits of the stack, so *if* that's
> true, then eventually, you'd be able to get enough samples that you
> could reverse engineer the prandom state. This could
> On Tue, Apr 16, 2019 at 11:10:16AM +0000, Reshetova, Elena wrote:
> > >
> > > The kernel can execute millions of syscalls per second, I'm pretty sure
> > > there's a statistical attack against:
> > >
> > > * This is a maximally equidistributed combined Tausworthe generator
> > > * based on
> So a couple of comments; I wasn't able to find the full context for
> this patch, and looking over the thread on kernel-hardening from late
> February still left me confused exactly what attacks this would help
> us protect against (since this isn't my area and I didn't take the
> time to read
On Tue, Apr 16, 2019 at 11:43:49AM -0400, Theodore Ts'o wrote:
> If it's x86 specific, maybe the simplest thing to do is to use RDRAND
> if it exists, and fall back to something involving a TSC and maybe
> prandom_u32 (assuming on how bad you think the stack leak is going to
> be) if RDRAND isn't
So a couple of comments; I wasn't able to find the full context for
this patch, and looking over the thread on kernel-hardening from late
February still left me confused exactly what attacks this would help
us protect against (since this isn't my area and I didn't take the
time to read all of the
From: Peter Zijlstra
> Sent: 16 April 2019 13:08
...
> So the argument against using TSC directly was that it might be easy to
> guess most of the TSC bits in timing attack. But IIRC there is fairly
> solid evidence that the lowest TSC bits are very hard to guess and might
> in fact be a very good
On Tue, Apr 16, 2019 at 11:10:16AM +, Reshetova, Elena wrote:
> >
> > The kernel can execute millions of syscalls per second, I'm pretty sure
> > there's a statistical attack against:
> >
> > * This is a maximally equidistributed combined Tausworthe generator
> > * based on code from GNU
Adding Theodore & Daniel since I guess they are the best positioned to comment on
exact strengths of prandom. See my comments below.
> * Reshetova, Elena wrote:
>
> > > 4)
> > >
> > > But before you tweak the patch, a more fundamental question:
> > >
> > > Does the stack offset have to be per
* Reshetova, Elena wrote:
> > 4)
> >
> > But before you tweak the patch, a more fundamental question:
> >
> > Does the stack offset have to be per *syscall execution* randomized?
> > Which threats does this protect against that a simpler per task syscall
> > random offset wouldn't protect
Hi Ingo,
Thank you for your feedback! See my comments below.
> * Elena Reshetova wrote:
>
> > This is an example of produced assembly code for gcc x86_64:
> >
> > ...
> > add_random_stack_offset();
> > 0x810022e9 callq 0x81459570
> > 0x810022ee movzbl %al,%eax
> >
* Elena Reshetova wrote:
> This is an example of produced assembly code for gcc x86_64:
>
> ...
> add_random_stack_offset();
> 0x810022e9 callq 0x81459570
> 0x810022ee movzbl %al,%eax
> 0x810022f1 add$0x16,%rax
> 0x810022f5 and$0x1f8,%eax
>
On Thu, Apr 11, 2019 at 10:36 PM Reshetova, Elena wrote:
>
> > On Wed, Apr 10, 2019 at 3:24 AM Reshetova, Elena wrote:
> > >
> > >
> > > > > > On Mon, Apr 08, 2019 at 09:13:58AM +0300, Elena Reshetova wrote:
> > > > > > > diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
> > > >
> On Wed, Apr 10, 2019 at 3:24 AM Reshetova, Elena wrote:
> >
> >
> > > > > On Mon, Apr 08, 2019 at 09:13:58AM +0300, Elena Reshetova wrote:
> > > > > > diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
> > > > > > index 7bc105f47d21..38ddc213a5e9 100644
> > > > > > ---
On Wed, Apr 10, 2019 at 3:24 AM Reshetova, Elena wrote:
>
>
> > > > On Mon, Apr 08, 2019 at 09:13:58AM +0300, Elena Reshetova wrote:
> > > > > diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
> > > > > index 7bc105f47d21..38ddc213a5e9 100644
> > > > > --- a/arch/x86/entry/common.c
>
> > > On Mon, Apr 08, 2019 at 09:13:58AM +0300, Elena Reshetova wrote:
> > > > diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
> > > > index 7bc105f47d21..38ddc213a5e9 100644
> > > > --- a/arch/x86/entry/common.c
> > > > +++ b/arch/x86/entry/common.c
> > > > @@ -35,6 +35,12 @@
> >
* Reshetova, Elena wrote:
> > * Josh Poimboeuf wrote:
> >
> > > On Mon, Apr 08, 2019 at 09:13:58AM +0300, Elena Reshetova wrote:
> > > > diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
> > > > index 7bc105f47d21..38ddc213a5e9 100644
> > > > --- a/arch/x86/entry/common.c
> > >
> * Josh Poimboeuf wrote:
>
> > On Mon, Apr 08, 2019 at 09:13:58AM +0300, Elena Reshetova wrote:
> > > diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
> > > index 7bc105f47d21..38ddc213a5e9 100644
> > > --- a/arch/x86/entry/common.c
> > > +++ b/arch/x86/entry/common.c
> > > @@
* Josh Poimboeuf wrote:
> On Mon, Apr 08, 2019 at 09:13:58AM +0300, Elena Reshetova wrote:
> > diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
> > index 7bc105f47d21..38ddc213a5e9 100644
> > --- a/arch/x86/entry/common.c
> > +++ b/arch/x86/entry/common.c
> > @@ -35,6 +35,12 @@
>
On Mon, Apr 8, 2019 at 6:31 AM Reshetova, Elena wrote:
> Originally I was thinking that in-stack randomization makes sense
> only for x86_64, since this is what VMAP stack on x86 depends on.
> Without VMAP stack and guard pages, there are easier ways to attack,
> so hardening there does not
> On Mon, Apr 08, 2019 at 09:13:58AM +0300, Elena Reshetova wrote:
> > diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
> > index 7bc105f47d21..38ddc213a5e9 100644
> > --- a/arch/x86/entry/common.c
> > +++ b/arch/x86/entry/common.c
> > @@ -35,6 +35,12 @@
> > #define
On Mon, Apr 08, 2019 at 09:13:58AM +0300, Elena Reshetova wrote:
> diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
> index 7bc105f47d21..38ddc213a5e9 100644
> --- a/arch/x86/entry/common.c
> +++ b/arch/x86/entry/common.c
> @@ -35,6 +35,12 @@
> #define CREATE_TRACE_POINTS
>