- On Sep 19, 2019, at 12:26 PM, Will Deacon w...@kernel.org wrote:
[...]
>>
>> The current wording from membarrier(2) is:
>>
>> The "expedited" commands complete faster than the non-expedited
>> ones; they never block, but have the downside of causing
>>
Hi Mathieu,
Sorry for the delay in responding.
On Fri, Sep 13, 2019 at 10:22:28AM -0400, Mathieu Desnoyers wrote:
- On Sep 13, 2019, at 12:04 PM, Oleg Nesterov o...@redhat.com wrote:
> On 09/13, Mathieu Desnoyers wrote:
>>
>> membarrier_exec_mmap(), which seems to be affected by the same problem.
>
> IIRC, in the last version it is called by exec_mmap() under task_lock(),
> so it should be fine.
Fair
On Thu, Sep 12, 2019 at 2:48 PM Will Deacon wrote:
>
> So the man page for sys_membarrier states that the expedited variants "never
> block", which feels pretty strong. Do any other system calls claim to
> provide this guarantee without a failure path if blocking is necessary?
The traditional
On 09/08, Mathieu Desnoyers wrote:
>
> +static void sync_runqueues_membarrier_state(struct mm_struct *mm)
> +{
> +	int membarrier_state = atomic_read(&mm->membarrier_state);
> +	bool fallback = false;
> +	cpumask_var_t tmpmask;
> +	int cpu;
> +
> +	if (atomic_read(&mm->mm_users) == 1
The membarrier_state field is located within the mm_struct, which
is not guaranteed to still exist when the membarrier system call
iterates over the runqueues without holding the runqueue locks.
Copy membarrier_state from the mm_struct into the scheduler runqueue
when the scheduler switches between mm.