On Mon, 20 Nov 2017, Mathieu Desnoyers wrote:
> - On Nov 20, 2017, at 12:48 PM, Thomas Gleixner t...@linutronix.de wrote:
> The use-case for the 4k memcpy operation is a per-cpu ring buffer where
> the rseq fast-path does the following:
>
> - ring buffer push: in the rseq asm instruction sequence,
> Having cpu_opv do a 4k memcpy allows it to handle scenarios where
> rseq fails to progress.
If anybody ever gets that right. It will be really hard to just
test such a path.
It also seems fairly theoretical to me. Do you even have a
test case where the normal path stops making forward progress?
On Mon, 20 Nov 2017, Thomas Gleixner wrote:
> On Mon, 20 Nov 2017, Mathieu Desnoyers wrote:
> > >> + * The reason why we require all pointer offsets to be calculated by
> > >> + * user-space beforehand is because we need to use get_user_pages_fast()
> > >> + * to first pin all pages touched by each
On Mon, 20 Nov 2017, Mathieu Desnoyers wrote:
> - On Nov 16, 2017, at 6:26 PM, Thomas Gleixner t...@linutronix.de wrote:
> >> +#define NR_PINNED_PAGES_ON_STACK 8
> >
> > 8 pinned pages on stack? Which stack?
>
> The common cases need to touch few pages, and we can keep the
> pointers in an a
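The pattern Mathieu describes (common case served from a small on-stack array, heap allocation only for large operation vectors) can be sketched in user-space C. The function name is made up and the kernel-side details (get_user_pages_fast(), actual pinning and unpinning) are elided:

```c
#include <stdlib.h>

#define NR_PINNED_PAGES_ON_STACK 8

/* Returns 0 when the on-stack array sufficed, 1 when the heap
 * fallback was taken, -1 on allocation failure. */
static int process_pages(int nr_pages)
{
	void *stack_pages[NR_PINNED_PAGES_ON_STACK];
	void **pages = stack_pages;
	int used_heap = 0;

	if (nr_pages > NR_PINNED_PAGES_ON_STACK) {
		/* Rare case: a vector touching many pages. */
		pages = calloc((size_t)nr_pages, sizeof(*pages));
		if (!pages)
			return -1;
		used_heap = 1;
	}
	/* ... pin the pages, run the operation vector, unpin ... */
	if (used_heap)
		free(pages);
	return used_heap;
}
```

The design point being argued is that the common case avoids any allocation at all, at the cost of a fixed-size array living on the kernel stack.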
- On Nov 16, 2017, at 6:26 PM, Thomas Gleixner t...@linutronix.de wrote:
> On Tue, 14 Nov 2017, Mathieu Desnoyers wrote:
>> +#ifdef __KERNEL__
>> +# include
>> +#else /* #ifdef __KERNEL__ */
>
> Sigh.
fixed.
>
>> +# include
>> +#endif /* #else #ifdef __KERNEL_
On Fri, 17 Nov 2017, Mathieu Desnoyers wrote:
> - On Nov 17, 2017, at 5:09 AM, Thomas Gleixner t...@linutronix.de wrote:
> 7) Allow libraries with multi-part algorithms to work on same per-cpu
>data without affecting the allowed cpu mask
>
> I stumbled on an interesting use-case within the
On Fri, 17 Nov 2017, Andi Kleen wrote:
> > The most straight forward is to have a mechanism which forces everything
> > into the slow path in case of debugging, lack of progress, etc. The slow
>
> That's the abort address, right?
Yes.
> For the generic case the fall back path would require disabling preemption
> The most straight forward is to have a mechanism which forces everything
> into the slow path in case of debugging, lack of progress, etc. The slow
That's the abort address, right?
For the generic case the fall back path would require disabling preemption
unfortunately, for which we don't have
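The "force everything into the slow path" mechanism Thomas mentions can be sketched as a flag consulted before the fast path, set by a debugger or a lack-of-progress detector. Names are illustrative; the real mechanism under discussion is the rseq abort address plus a kernel-assisted slow path:

```c
#include <stdbool.h>

static bool force_slow_path;  /* e.g. set when single-stepping is detected */
static long counter;

/* Stand-ins: the real fast path is an rseq critical section, the real
 * slow path would need preemption disabled (hence cpu_opv). */
static int fast_inc(void) { counter++; return 0; }
static int slow_inc(void) { counter++; return 1; }

static int do_inc(void)
{
	if (!force_slow_path)
		return fast_inc();
	return slow_inc();
}
```

The controversial part of the thread is precisely the return path of slow_inc(): user-space cannot disable preemption itself, so the fallback either takes a lock (changing the fast path's cost) or asks the kernel to run the operation, which is what cpu_opv proposes.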
On Fri, 17 Nov 2017, Andi Kleen wrote:
> > 1) Handling single-stepping from tools
> >
> > Tools like debuggers, and simulators like record-replay ("rr") use
> > single-stepping to run through existing programs. If core libraries start
>
> No, rr doesn't use single stepping. It uses branch stepping
Thanks for the detailed write up. That should have been in the
changelog...
Some comments below. Overall I think the case for the syscall is still
very weak.
> Let's have a look at why cpu_opv is needed. I'll make sure to enhance the
> changelog and documentation to include that information.
>
>
On Thu, 16 Nov 2017, Andi Kleen wrote:
> My preference would be just to drop this new super ugly system call.
>
> It's also not just the ugliness, but the very large attack surface
> that worries me here.
>
> As far as I know it is only needed to support single stepping, correct?
I can't figure
My preference would be just to drop this new super ugly system call.
It's also not just the ugliness, but the very large attack surface
that worries me here.
As far as I know it is only needed to support single stepping, correct?
We already have other code that cannot be single stepped, most
p
On Tue, 14 Nov 2017, Mathieu Desnoyers wrote:
> +#ifdef __KERNEL__
> +# include
> +#else /* #ifdef __KERNEL__ */
Sigh.
> +# include
> +#endif /* #else #ifdef __KERNEL__ */
> +
> +#include
> +
> +#ifdef __LP64__
> +# define CPU_OP_FIELD_u32_u64(field)
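Thomas's quote cuts off before the macro body. A plausible reconstruction of what a CPU_OP_FIELD_u32_u64() macro does in this kind of ABI (the layout details here are my assumption, not the patch's verbatim text): make the field occupy a fixed 64-bit slot on every architecture, padding the 32-bit case and placing the padding according to endianness, so 32-bit and 64-bit user-space share one structure layout:

```c
#include <stddef.h>
#include <stdint.h>

#ifdef __LP64__
# define CPU_OP_FIELD_u32_u64(field)  uint64_t field
#elif defined(__BIG_ENDIAN__)
# define CPU_OP_FIELD_u32_u64(field)  uint32_t field ## _padding, field
#else
# define CPU_OP_FIELD_u32_u64(field)  uint32_t field, field ## _padding
#endif

/* Demo struct (not the proposed ABI): the slot is 8 bytes either way. */
struct cpu_op_demo {
	CPU_OP_FIELD_u32_u64(addr);
	int64_t count;
};

_Static_assert(offsetof(struct cpu_op_demo, count) == 8,
	       "identical layout for 32- and 64-bit user-space");
```

The point is to avoid a compat layer in the kernel: the same struct definition parses identically regardless of the caller's word size.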
Hi Matthieu
On 14 November 2017 at 21:03, Mathieu Desnoyers
wrote:
> This new cpu_opv system call executes a vector of operations on behalf
> of user-space on a specific CPU with preemption disabled. It is inspired
> by the readv() and writev() system calls, which take a "struct iovec" array
> as argument.
- On Nov 14, 2017, at 3:03 PM, Mathieu Desnoyers
mathieu.desnoy...@efficios.com wrote:
[...]
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 3b448ba82225..cab256c1720a 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -1209,6 +1209,8 @@ static inline void
This new cpu_opv system call executes a vector of operations on behalf
of user-space on a specific CPU with preemption disabled. It is inspired
by the readv() and writev() system calls, which take a "struct iovec" array
as argument.
The operations available are: comparison, memcpy, add, or, and, xor,
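The changelog's description maps naturally onto a small user-space model. The opcode names and struct fields below are illustrative, not the proposed ABI; the kernel version would additionally pin the pages, migrate to the target CPU, and disable preemption around the loop:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

enum cpu_op_type { OP_COMPARE_EQ, OP_MEMCPY, OP_ADD };

struct cpu_op {
	enum cpu_op_type op;
	void *dst;
	const void *src;   /* second compare operand or copy source */
	size_t len;
	int64_t count;     /* OP_ADD increment */
};

/* Returns 0 on success, 1 if a comparison failed. A failed compare
 * abandons the whole vector, so a compare op can guard the side
 * effects that follow it, iovec-style but with semantics. */
static int cpu_opv_model(struct cpu_op *ops, int nr_ops)
{
	for (int i = 0; i < nr_ops; i++) {
		struct cpu_op *o = &ops[i];

		switch (o->op) {
		case OP_COMPARE_EQ:
			if (memcmp(o->dst, o->src, o->len) != 0)
				return 1;
			break;
		case OP_MEMCPY:
			memcpy(o->dst, o->src, o->len);
			break;
		case OP_ADD:
			*(int64_t *)o->dst += o->count;
			break;
		}
	}
	return 0;
}
```

A typical vector is a compare of the expected value followed by the updates, which is how the per-cpu data structures discussed in this thread stay consistent when the vector runs with preemption disabled.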