Hi!
> On Fri, Oct 28, 2016 at 03:05:22PM +0100, Mark Rutland wrote:
> >
> > > > * the precise semantics of performance counter events varies drastically
> > > > across implementations. PERF_COUNT_HW_CACHE_MISSES, might only map to
> > > > one particular level of cache, and/or may not be implemented on all
On 01.11.2016 09:10, Pavel Machek wrote:
cpu family : 6
model      : 23
model name : Intel(R) Core(TM)2 Duo CPU E7400 @ 2.80GHz
stepping   : 10
microcode  : 0xa07
so rowhammerjs/native is not available for this system. Bit mapping
for memory hash functions would
Hi!
> * Pavel Machek wrote:
>
> > I'm not going to buy broken hardware just for a test.
>
> Can you suggest a method to find heavily rowhammer affected hardware? Only by
> testing it, or are there some chipset ID ranges or dmidecode info that will
> pinpoint potentially affected machines?
T
On 01.11.2016 07:33, Ingo Molnar wrote:
Can you suggest a method to find heavily rowhammer affected hardware? Only by
testing it, or are there some chipset ID ranges or dmidecode info that will
pinpoint potentially affected machines?
I have worked with many different systems both on running ro
On Tue, 2016-11-01 at 07:33 +0100, Ingo Molnar wrote:
> * Pavel Machek wrote:
>
> > I'm not going to buy broken hardware just for a test.
>
> Can you suggest a method to find heavily rowhammer affected hardware?
> Only by
> testing it, or are there some chipset ID ranges or dmidecode info
> that will pinpoint potentially affected machines?
* Pavel Machek wrote:
> I'm not going to buy broken hardware just for a test.
Can you suggest a method to find heavily rowhammer affected hardware? Only by
testing it, or are there some chipset ID ranges or dmidecode info that will
pinpoint potentially affected machines?
Thanks,
	Ingo
On Mon, Oct 31, 2016 at 10:13:03PM +0100, Pavel Machek wrote:
> On Mon 2016-10-31 14:47:39, Mark Rutland wrote:
> > On Mon, Oct 31, 2016 at 09:27:05AM +0100, Pavel Machek wrote:
> > > > On Fri, Oct 28, 2016 at 01:21:36PM +0200, Pavel Machek wrote:
> > > > > > Has this been tested on a system vulnerable to rowhammer, and if so, was it reliable in mitigating the issue?
On Mon 2016-10-31 14:47:39, Mark Rutland wrote:
> On Mon, Oct 31, 2016 at 09:27:05AM +0100, Pavel Machek wrote:
> > > On Fri, Oct 28, 2016 at 01:21:36PM +0200, Pavel Machek wrote:
> > > > > Has this been tested on a system vulnerable to rowhammer, and if so,
> > > > > was
> > > > > it reliable in
On Mon, Oct 31, 2016 at 09:27:05AM +0100, Pavel Machek wrote:
> > On Fri, Oct 28, 2016 at 01:21:36PM +0200, Pavel Machek wrote:
> > > > Has this been tested on a system vulnerable to rowhammer, and if so, was
> > > > it reliable in mitigating the issue?
> > > I do not have vulnerable machine near
Hi!
> On Fri, Oct 28, 2016 at 01:21:36PM +0200, Pavel Machek wrote:
> > > Has this been tested on a system vulnerable to rowhammer, and if so, was
> > > it reliable in mitigating the issue?
> > >
> > > Which particular attack codebase was it tested against?
> >
> > I have rowhammer-test here,
>
On 30.10.2016 00:01, Pavel Machek wrote:
Hmm, maybe I'm glad I don't have a new machine :-).
I assume you still get _some_ bitflips with generic "rowhammer"?
1 or 2 every 20-30 minutes...
On Sat 2016-10-29 23:49:57, Daniel Gruss wrote:
> On 29.10.2016 23:45, Pavel Machek wrote:
> >ivy/sandy/haswell/skylake, so I'll just use the generic version...?)
>
> yes, generic might work, but i never tested it on anything that old...
>
> on my system i have >30 bit flips per second (ivy bridge i5-3xxx) with the rowhammer-ivy test...
On 29.10.2016 23:45, Pavel Machek wrote:
ivy/sandy/haswell/skylake, so I'll just use the generic version...?)
yes, generic might work, but i never tested it on anything that old...
on my system i have >30 bit flips per second (ivy bridge i5-3xxx) with
the rowhammer-ivy test... sometimes even
On Sat 2016-10-29 23:07:59, Daniel Gruss wrote:
> On 29.10.2016 23:05, Pavel Machek wrote:
> >So far I did bzip2 and kernel compilation. I believe I can prevent
> >flips in rowhammer-test with bzip2 going from 4 seconds to 5
> >seconds... let me see.
>
> can you prevent bitflips in this one?
> https://github.com/IAIK/rowhammerjs/tree/master/native
On 29.10.2016 23:05, Pavel Machek wrote:
So far I did bzip2 and kernel compilation. I believe I can prevent
flips in rowhammer-test with bzip2 going from 4 seconds to 5
seconds... let me see.
can you prevent bitflips in this one?
https://github.com/IAIK/rowhammerjs/tree/master/native
Ok, le
Hi!
On Sat 2016-10-29 22:05:16, Daniel Gruss wrote:
> On 29.10.2016 21:42, Pavel Machek wrote:
> >Congratulations. Now I'd like to take away your toys :-).
>
> I would like you to do that, but I'm very confident you won't be successful
> the way you're starting ;)
:-). Lets see.
> >Not in my test
On 29.10.2016 21:42, Pavel Machek wrote:
Congratulations. Now I'd like to take away your toys :-).
I would like you to do that, but I'm very confident you won't be
successful the way you're starting ;)
Not in my testing.
Have you tried music/video reencoding? Games? Anything that works with
Hi!
> I think that this idea to mitigate Rowhammer is not a good approach.
Well.. it does not have to be good if it is the best we have.
> I wrote Rowhammer.js (we published a paper on that) and I had the first
> reproducible bit flips on DDR4 at both increased and default refresh rates
> (published in our DRAMA paper).
I think that this idea to mitigate Rowhammer is not a good approach.
I wrote Rowhammer.js (we published a paper on that) and I had the first
reproducible bit flips on DDR4 at both increased and default refresh
rates (published in our DRAMA paper).
We have researched the number of cache misses
On Fri, Oct 28, 2016 at 08:30:14PM +0200, Pavel Machek wrote:
> Would you (or someone) have pointer to good documentation source on
> available performance counters?
The Intel SDM has a section on them and the AMD Bios and Kernel
Developers Guide does too.
That is, they contain lists of available
On Fri 2016-10-28 16:18:40, Peter Zijlstra wrote:
> On Fri, Oct 28, 2016 at 03:05:22PM +0100, Mark Rutland wrote:
> >
> > > > * the precise semantics of performance counter events varies drastically
> > > > across implementations. PERF_COUNT_HW_CACHE_MISSES, might only map to
> > > > one particular level of cache, and/or may not be implemented on all
Hi!
> On Fri, Oct 28, 2016 at 01:21:36PM +0200, Pavel Machek wrote:
> > > Has this been tested on a system vulnerable to rowhammer, and if so, was
> > > it reliable in mitigating the issue?
> > >
> > > Which particular attack codebase was it tested against?
> >
> > I have rowhammer-test here,
>
On Fri, Oct 28, 2016 at 03:05:22PM +0100, Mark Rutland wrote:
>
> > > * the precise semantics of performance counter events varies drastically
> > > across implementations. PERF_COUNT_HW_CACHE_MISSES, might only map to
> > > one particular level of cache, and/or may not be implemented on all
>
Hi,
On Fri, Oct 28, 2016 at 01:21:36PM +0200, Pavel Machek wrote:
> > Has this been tested on a system vulnerable to rowhammer, and if so, was
> > it reliable in mitigating the issue?
> >
> > Which particular attack codebase was it tested against?
>
> I have rowhammer-test here,
>
> commit 9824
Hi!
> > I agree this needs to be tunable (and with the other suggestions). But
> > this is actually not the most important tunable: the detection
> > threshold (rh_attr.sample_period) should be way more important.
> >
> > And yes, this will all need to be tunable, somehow. But lets verify
> > tha
Hi!
> > I agree this needs to be tunable (and with the other suggestions). But
> > this is actually not the most important tunable: the detection
> > threshold (rh_attr.sample_period) should be way more important.
>
> So being totally ignorant of the detail of how rowhammer abuses the DDR
> thing
Hi!
> I missed the original, so I've lost some context.
You can read it on lkml, but I guess you did not lose anything
important.
> Has this been tested on a system vulnerable to rowhammer, and if so, was
> it reliable in mitigating the issue?
>
> Which particular attack codebase was it tested against?
On Fri, Oct 28, 2016 at 11:35:47AM +0200, Ingo Molnar wrote:
>
> * Vegard Nossum wrote:
>
> > Would it make sense to sample the counter on context switch, do some
> > accounting on a per-task cache miss counter, and slow down just the
> > single task(s) with a too high cache miss rate? That way
Hi,
I missed the original, so I've lost some context.
Has this been tested on a system vulnerable to rowhammer, and if so, was
it reliable in mitigating the issue?
Which particular attack codebase was it tested against?
On Thu, Oct 27, 2016 at 11:27:47PM +0200, Pavel Machek wrote:
> --- /dev/null
On 28 October 2016 at 11:35, Ingo Molnar wrote:
>
> * Vegard Nossum wrote:
>
>> Would it make sense to sample the counter on context switch, do some
>> accounting on a per-task cache miss counter, and slow down just the
>> single task(s) with a too high cache miss rate? That way there's no
>> glo
* Vegard Nossum wrote:
> Would it make sense to sample the counter on context switch, do some
> accounting on a per-task cache miss counter, and slow down just the
> single task(s) with a too high cache miss rate? That way there's no
> global slowdown (which I assume would be the case here). The
On 28 October 2016 at 11:04, Peter Zijlstra wrote:
> On Fri, Oct 28, 2016 at 10:50:39AM +0200, Pavel Machek wrote:
>> On Fri 2016-10-28 09:07:01, Ingo Molnar wrote:
>> >
>> > * Pavel Machek wrote:
>> >
>> > > +static void rh_overflow(struct perf_event *event, struct
>> > > perf_sample_data *data
On Fri, Oct 28, 2016 at 10:50:39AM +0200, Pavel Machek wrote:
> On Fri 2016-10-28 09:07:01, Ingo Molnar wrote:
> >
> > * Pavel Machek wrote:
> >
> > > +static void rh_overflow(struct perf_event *event, struct
> > > perf_sample_data *data, struct pt_regs *regs)
> > > +{
> > > + u64 *ts = this_cpu_ptr(&rh_timestamp); /* this is NMI context */
* Pavel Machek wrote:
> On Fri 2016-10-28 09:07:01, Ingo Molnar wrote:
> >
> > * Pavel Machek wrote:
> >
> > > +static void rh_overflow(struct perf_event *event, struct
> > > perf_sample_data *data, struct pt_regs *regs)
> > > +{
> > > + u64 *ts = this_cpu_ptr(&rh_timestamp); /* this is NMI context */
On Fri 2016-10-28 09:07:01, Ingo Molnar wrote:
>
> * Pavel Machek wrote:
>
> > +static void rh_overflow(struct perf_event *event, struct perf_sample_data
> > *data, struct pt_regs *regs)
> > +{
> > + u64 *ts = this_cpu_ptr(&rh_timestamp); /* this is NMI context */
> > + u64 now = ktime_get_mono_fast_ns();
* Pavel Machek wrote:
> +static void rh_overflow(struct perf_event *event, struct perf_sample_data
> *data, struct pt_regs *regs)
> +{
> + u64 *ts = this_cpu_ptr(&rh_timestamp); /* this is NMI context */
> + u64 now = ktime_get_mono_fast_ns();
> + s64 delta = now - *ts;
> +
> +
Hi!
> > if (event)
> > perf_event_release_kernel(event);
> > }
> > }
>
> This is pretty cool. Are there workloads other than rowhammer that
> could trip this, and if so, how bad would this delay be for them?
>
> At the very least, this could be beh
On Thu, Oct 27, 2016 at 2:33 AM, Peter Zijlstra wrote:
> On Thu, Oct 27, 2016 at 11:11:04AM +0200, Pavel Machek wrote:
>> How to work around rowhammer, break my system _and_ make kernel perf
>> maintainers scream at the same time: (:-) )
>>
>> I think I got the place now. Let me try...
>
> Lol ;-)
On Wed, Oct 26, 2016 at 10:54:16PM +0200, Pavel Machek wrote:
> Hi!
>
> I'd like to get an interrupt every million cache misses... to do a
> printk() or something like that. As far as I can tell, modern hardware
> should allow me to do that. AFAICT performance events subsystem can do
> something like that, but I can't figure out where the code is / what I should call.
Hi!
> > I'd like to get an interrupt every million cache misses... to do a
> > printk() or something like that. As far as I can tell, modern hardware
> > should allow me to do that. AFAICT performance events subsystem can do
> > something like that, but I can't figure out where the code is / what
On Thu 2016-10-27 10:28:01, Peter Zijlstra wrote:
> On Wed, Oct 26, 2016 at 10:54:16PM +0200, Pavel Machek wrote:
> > Hi!
> >
> > I'd like to get an interrupt every million cache misses... to do a
> > printk() or something like that. As far as I can tell, modern hardware
> > should allow me to do
On Thu, Oct 27, 2016 at 11:11:04AM +0200, Pavel Machek wrote:
> How to work around rowhammer, break my system _and_ make kernel perf
> maintainers scream at the same time: (:-) )
>
> I think I got the place now. Let me try...
Lol ;-)
>
> diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
On Thu, Oct 27, 2016 at 10:46:38AM +0200, Pavel Machek wrote:
> And actually, printk() is not needed, udelay(50msec) is. Reason is,
that DRAM becomes unreliable if about a million cache misses happen in
> under 64msec -- so I'd like to slow the system down in such cases to
> prevent bug from biting
Hi!
I'd like to get an interrupt every million cache misses... to do a
printk() or something like that. As far as I can tell, modern hardware
should allow me to do that. AFAICT performance events subsystem can do
something like that, but I can't figure out where the code is / what I
should call.