On Tue, May 22, 2018 at 07:05:48PM +0300, Boaz Harrosh wrote:
> On 18/05/18 17:14, Christopher Lameter wrote:
> > On Tue, 15 May 2018, Boaz Harrosh wrote:
> >
> >>> I don't think page tables work the way you think they work.
> >>>
> >>> + err = vm_insert_pfn_prot(zt->vma, zt_addr,
Dave Hansen wrote:
> On 05/22/2018 10:51 AM, Matthew Wilcox wrote:
>> But CR3 is a per-CPU register. So it'd be *possible* to allocate one
>> PGD per CPU (per process). Have them be identical in all but one of
>> the PUD entries. Then you've reserved 1/512 of your address space for
>> per-CPU
On 05/22/2018 10:51 AM, Matthew Wilcox wrote:
> But CR3 is a per-CPU register. So it'd be *possible* to allocate one
> PGD per CPU (per process). Have them be identical in all but one of
> the PUD entries. Then you've reserved 1/512 of your address space for
> per-CPU pages.
>
> Complicated,
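A rough conceptual sketch of the idea Matthew is floating, purely as illustration: one top-level table per CPU for the same mm, identical everywhere except one reserved slot. None of this exists in the kernel; the function name, the memcpy of the live pgd and the complete absence of locking/PTI handling are simplifications.

#include <linux/mm.h>
#include <asm/pgalloc.h>

static int mm_alloc_percpu_pgds(struct mm_struct *mm, pgd_t **percpu_pgd)
{
        int cpu;

        for_each_possible_cpu(cpu) {
                pgd_t *pgd = pgd_alloc(mm);     /* this CPU's copy of the top level */

                if (!pgd)
                        return -ENOMEM;
                /* share every entry with mm->pgd; one slot would later be
                 * repointed at a per-CPU private PUD page */
                memcpy(pgd, mm->pgd, PTRS_PER_PGD * sizeof(pgd_t));
                percpu_pgd[cpu] = pgd;
        }
        return 0;
}

Loading the current CPU's copy into CR3 at context switch is what would carve out the "1/512 of your address space" Matthew mentions; keeping all the copies coherent is the complicated part.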
On Tue, May 22, 2018 at 10:03:54AM -0700, Dave Hansen wrote:
> On 05/22/2018 09:46 AM, Christopher Lameter wrote:
> > On Tue, 22 May 2018, Dave Hansen wrote:
> >
> >> On 05/22/2018 09:05 AM, Boaz Harrosh wrote:
> >>> How can we implement "Private memory"?
> >> Per-cpu page tables would do it.
> >
On Tue, 22 May 2018, Dave Hansen wrote:
> On 05/22/2018 09:46 AM, Christopher Lameter wrote:
> > On Tue, 22 May 2018, Dave Hansen wrote:
> >
> >> On 05/22/2018 09:05 AM, Boaz Harrosh wrote:
> >>> How can we implement "Private memory"?
> >> Per-cpu page tables would do it.
> > We already have that
On 05/22/2018 09:46 AM, Christopher Lameter wrote:
> On Tue, 22 May 2018, Dave Hansen wrote:
>
>> On 05/22/2018 09:05 AM, Boaz Harrosh wrote:
>>> How can we implement "Private memory"?
>> Per-cpu page tables would do it.
> We already have that for percpu subsystem. See alloc_percpu()
I actually
On Tue, May 22, 2018 at 04:46:05PM, Christopher Lameter wrote:
> On Tue, 22 May 2018, Dave Hansen wrote:
>
> > On 05/22/2018 09:05 AM, Boaz Harrosh wrote:
> > > How can we implement "Private memory"?
> >
> > Per-cpu page tables would do it.
>
> We already have that for percpu subsystem.
On Tue, 22 May 2018, Dave Hansen wrote:
> On 05/22/2018 09:05 AM, Boaz Harrosh wrote:
> > How can we implement "Private memory"?
>
> Per-cpu page tables would do it.
We already have that for percpu subsystem. See alloc_percpu()
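For reference, the facility Christopher points at: alloc_percpu() hands every possible CPU its own instance of a structure, and the this_cpu_*() accessors touch only the local copy. A minimal usage sketch; the struct and variable names are invented, not from the thread:

#include <linux/percpu.h>
#include <linux/types.h>

struct srv_counters {
        u64 calls;
        u64 bytes;
};

static struct srv_counters __percpu *counters;

static int counters_init(void)
{
        counters = alloc_percpu(struct srv_counters);
        return counters ? 0 : -ENOMEM;
}

static void count_call(u64 len)
{
        /* each CPU bumps its own copy: no shared cacheline, no lock */
        this_cpu_inc(counters->calls);
        this_cpu_add(counters->bytes, len);
}

That covers per-CPU kernel data; whether the same locality can be had for a user-space mapping (the vma in the quoted patch) is the question the rest of the thread argues about.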
On 05/22/2018 09:05 AM, Boaz Harrosh wrote:
> How can we implement "Private memory"?
Per-cpu page tables would do it.
On 18/05/18 17:14, Christopher Lameter wrote:
> On Tue, 15 May 2018, Boaz Harrosh wrote:
>
>>> I don't think page tables work the way you think they work.
>>>
>>> + err = vm_insert_pfn_prot(zt->vma, zt_addr, pfn, prot);
>>>
>>> That doesn't just insert it into the local CPU's page
On Tue, 15 May 2018, Boaz Harrosh wrote:
> > I don't think page tables work the way you think they work.
> >
> > + err = vm_insert_pfn_prot(zt->vma, zt_addr, pfn, prot);
> >
> > That doesn't just insert it into the local CPU's page table. Any CPU
> > which directly accesses or even
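For readers following the patch fragment, this is roughly how vm_insert_pfn_prot() (the v4.17-era API; later kernels use vmf_insert_pfn_prot()) is driven from a fault handler. Everything except the vm_insert_pfn_prot() call is an illustrative stand-in, not the actual ZUFS code:

#include <linux/mm.h>

static int zt_fault(struct vm_fault *vmf)
{
        struct vm_area_struct *vma = vmf->vma;
        unsigned long pfn = zt_addr_to_pfn(vma, vmf->address);  /* hypothetical lookup */
        int err;

        /* installs a PTE in the shared page table of vma->vm_mm, so every
         * CPU that runs this mm can use it; that is Matthew's objection to
         * treating the mapping as CPU-local */
        err = vm_insert_pfn_prot(vma, vmf->address, pfn, vma->vm_page_prot);
        if (err && err != -EBUSY)
                return VM_FAULT_SIGBUS;
        return VM_FAULT_NOPAGE;
}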
On 15/05/18 17:17, Peter Zijlstra wrote:
<>
>>
>> So I would love some mm guy to explain where are those bits collected?
>
> Depends on the architecture, some architectures only ever set bits,
> some, like x86, clear bits again. You want to look at switch_mm().
>
> Basically x86 clears the bit
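The bookkeeping Peter describes lives in the arch context-switch path. A much-simplified sketch of the x86 behaviour (the real code is switch_mm_irqs_off() in arch/x86/mm/tlb.c; this shows only the mm_cpumask() part of it):

#include <linux/mm_types.h>
#include <linux/cpumask.h>

static void example_mm_cpumask_update(struct mm_struct *prev,
                                      struct mm_struct *next,
                                      unsigned int cpu)
{
        /* leaving 'prev': this CPU no longer needs shootdowns for it */
        cpumask_clear_cpu(cpu, mm_cpumask(prev));

        /* entering 'next': flush_tlb_mm_range(next, ...) will now include
         * this CPU when it builds its IPI mask */
        cpumask_set_cpu(cpu, mm_cpumask(next));
}

The mask is thus a conservative record of which CPUs might hold TLB entries for the mm, which helps explain the extra bits Boaz keeps seeing set.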
On 15/05/18 17:18, Matthew Wilcox wrote:
> On Tue, May 15, 2018 at 05:10:57PM +0300, Boaz Harrosh wrote:
>> I'm not a lawyer either but I think I'm doing OK. Because I am doing exactly
>> like FUSE is doing. Only some 15 years later, with modern CPUs in mind. I do not
>> think I am doing
On 05/14/2018 10:28 AM, Boaz Harrosh wrote:
> The VM_LOCAL_CPU flag tells the Kernel that the vma will be used
> from a single-core only, and therefore invalidation (flush_tlb) of
> PTE(s) need not be a wide CPU scheduling.
This doesn't work on x86. We load TLB entries for lots of reasons, even
On Tue, May 15, 2018 at 05:10:57PM +0300, Boaz Harrosh wrote:
> I'm not a lawyer either but I think I'm doing OK. Because I am doing exactly
> like FUSE is doing. Only some 15 years later, with modern CPUs in mind. I do not
> think I am doing anything new here, am I?
You should talk to a
On Tue, May 15, 2018 at 02:54:29PM +0300, Boaz Harrosh wrote:
> At the beginning I was wishful thinking that the mm_cpumask(vma->vm_mm)
> should have a single bit set just as the affinity of the thread on
> creation of that thread. But then I saw that at 80% of the time some
> other random bits
On 15/05/18 16:50, Matthew Wilcox wrote:
> On Tue, May 15, 2018 at 04:29:22PM +0300, Boaz Harrosh wrote:
>> On 15/05/18 15:03, Matthew Wilcox wrote:
>>> You're getting dangerously close to admitting that the entire point
>>> of this exercise is so that you can link non-GPL NetApp code into the
>>>
On Tue, May 15, 2018 at 04:29:22PM +0300, Boaz Harrosh wrote:
> On 15/05/18 15:03, Matthew Wilcox wrote:
> > You're getting dangerously close to admitting that the entire point
> > of this exercise is so that you can link non-GPL NetApp code into the
> > kernel in clear violation of the GPL.
>
>
On 15/05/18 15:03, Matthew Wilcox wrote:
> On Tue, May 15, 2018 at 02:41:41PM +0300, Boaz Harrosh wrote:
>> That would be very hard. Because that program would:
>> - need to be root
>> - need to start and pretend it is zus Server with the all mount
>> thread thing, register new filesystem, grab
On 15/05/18 14:54, Boaz Harrosh wrote:
> On 15/05/18 03:44, Matthew Wilcox wrote:
>> On Mon, May 14, 2018 at 02:49:01PM -0700, Andrew Morton wrote:
>>> On Mon, 14 May 2018 20:28:01 +0300 Boaz Harrosh wrote:
In this project we utilize a per-core server thread so everything
is kept local.
On 15/05/18 15:07, Mark Rutland wrote:
> On Tue, May 15, 2018 at 01:43:23PM +0300, Boaz Harrosh wrote:
>> On 15/05/18 03:41, Matthew Wilcox wrote:
>>> On Mon, May 14, 2018 at 10:37:38PM +0300, Boaz Harrosh wrote:
On 14/05/18 22:15, Matthew Wilcox wrote:
> On Mon, May 14, 2018 at
On Tue, May 15, 2018 at 01:07:51PM +0100, Mark Rutland wrote:
> // speculatively allocates TLB
Ohh, right, I completely forgot about that, but that actually does
happen. We had trouble with AMD doing just that only about a year ago or
so IIRC.
CPUs are
On 15/05/18 15:09, Peter Zijlstra wrote:
> On Tue, May 15, 2018 at 02:41:41PM +0300, Boaz Harrosh wrote:
>> On 15/05/18 14:11, Matthew Wilcox wrote:
>
>>> You're still thinking about this from the wrong perspective. If you
>>> were writing a program to attack this facility, how would you do it?
On 15/05/18 14:47, Peter Zijlstra wrote:
> On Tue, May 15, 2018 at 01:43:23PM +0300, Boaz Harrosh wrote:
>> Yes I know, but that is exactly the point of this flag. I know that this
>> address is only ever accessed from a single core. Because it is an mmap (vma)
>> of an O_TMPFILE-exclusive file
On Tue, May 15, 2018 at 02:41:41PM +0300, Boaz Harrosh wrote:
> On 15/05/18 14:11, Matthew Wilcox wrote:
> > You're still thinking about this from the wrong perspective. If you
> > were writing a program to attack this facility, how would you do it?
> > It's not exactly hard to leak one
On Tue, May 15, 2018 at 01:43:23PM +0300, Boaz Harrosh wrote:
> On 15/05/18 03:41, Matthew Wilcox wrote:
> > On Mon, May 14, 2018 at 10:37:38PM +0300, Boaz Harrosh wrote:
> >> On 14/05/18 22:15, Matthew Wilcox wrote:
> >>> On Mon, May 14, 2018 at 08:28:01PM +0300, Boaz Harrosh wrote:
> On a
On Tue, May 15, 2018 at 02:41:41PM +0300, Boaz Harrosh wrote:
> That would be very hard. Because that program would:
> - need to be root
> - need to start and pretend it is zus Server with the all mount
> thread thing, register new filesystem, grab some pmem devices.
> - Mount the said
On 15/05/18 03:44, Matthew Wilcox wrote:
> On Mon, May 14, 2018 at 02:49:01PM -0700, Andrew Morton wrote:
>> On Mon, 14 May 2018 20:28:01 +0300 Boaz Harrosh wrote:
>>> In this project we utilize a per-core server thread so everything
>>> is kept local. If we use the regular zap_ptes() API All
On Tue, May 15, 2018 at 01:43:23PM +0300, Boaz Harrosh wrote:
> Yes I know, but that is exactly the point of this flag. I know that this
> address is only ever accessed from a single core. Because it is an mmap (vma)
> of an O_TMPFILE-exclusive file created in a core-pinned thread and I allow
>
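To make the usage model Boaz keeps describing concrete, a hedged userspace sketch: a server thread pins itself to one core, creates an O_TMPFILE file and mmaps it, so the mapping is only ever touched from that core. The mount point and error handling details are invented:

#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <sys/mman.h>
#include <unistd.h>

static void *map_core_local_buffer(int cpu, size_t len)
{
        cpu_set_t set;
        void *p;
        int fd;

        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        if (sched_setaffinity(0, sizeof(set), &set))    /* pin this thread */
                return NULL;

        fd = open("/mnt/zufs", O_TMPFILE | O_RDWR | O_EXCL, 0600);
        if (fd < 0)
                return NULL;
        if (ftruncate(fd, len) < 0) {
                close(fd);
                return NULL;
        }
        p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        close(fd);
        return p == MAP_FAILED ? NULL : p;
}

Whether the kernel can safely rely on that promise, rather than merely observe it, is the point the reviewers keep pressing.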
On 15/05/18 14:11, Matthew Wilcox wrote:
> On Tue, May 15, 2018 at 01:43:23PM +0300, Boaz Harrosh wrote:
>> On 15/05/18 03:41, Matthew Wilcox wrote:
>>> On Mon, May 14, 2018 at 10:37:38PM +0300, Boaz Harrosh wrote:
On 14/05/18 22:15, Matthew Wilcox wrote:
> On Mon, May 14, 2018 at
On Tue, May 15, 2018 at 01:43:23PM +0300, Boaz Harrosh wrote:
> On 15/05/18 03:41, Matthew Wilcox wrote:
> > On Mon, May 14, 2018 at 10:37:38PM +0300, Boaz Harrosh wrote:
> >> On 14/05/18 22:15, Matthew Wilcox wrote:
> >>> On Mon, May 14, 2018 at 08:28:01PM +0300, Boaz Harrosh wrote:
> On a
On 15/05/18 10:08, Christoph Hellwig wrote:
> On Mon, May 14, 2018 at 09:26:13PM +0300, Boaz Harrosh wrote:
>> I am pleased to be pushing for this patch ahead of the push of ZUFS, because
>> this is the only patch we need on top of an otherwise standard kernel.
>>
>> We are partnering with Distro(s) to push ZUFS
On 15/05/18 03:41, Matthew Wilcox wrote:
> On Mon, May 14, 2018 at 10:37:38PM +0300, Boaz Harrosh wrote:
>> On 14/05/18 22:15, Matthew Wilcox wrote:
>>> On Mon, May 14, 2018 at 08:28:01PM +0300, Boaz Harrosh wrote:
On a call to mmap an mmap provider (like an FS) can put
this flag on
On Mon, May 14, 2018 at 09:26:13PM +0300, Boaz Harrosh wrote:
> I am pleased to be pushing for this patch ahead of the push of ZUFS, because
> this is the only patch we need on top of an otherwise standard kernel.
>
> We are partnering with Distro(s) to push ZUFS out-of-tree to beta clients
> to try and stabilize
On Mon, May 14, 2018 at 02:49:01PM -0700, Andrew Morton wrote:
> On Mon, 14 May 2018 20:28:01 +0300 Boaz Harrosh wrote:
> > In this project we utilize a per-core server thread so everything
> > is kept local. If we use the regular zap_ptes() API, all CPUs
> > are scheduled for the unmap, though
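The cost Boaz is describing: with the stock API, tearing down the mapping goes through the generic unmap path and ends in a TLB flush that IPIs every CPU recorded in mm_cpumask(), not only the core the per-core server thread runs on. A minimal sketch of that teardown (vma stands for whatever mapping is being torn down; zap_vma_ptes() is the exported helper closest to the "zap_ptes()" he names):

        /* unmap the whole vma through the regular API; internally this walks
         * the page tables and finishes with a flush that is broadcast to the
         * CPUs in mm_cpumask(vma->vm_mm) */
        zap_vma_ptes(vma, vma->vm_start, vma->vm_end - vma->vm_start);

Avoiding that broadcast for a mapping known to be used on one core is the whole point of the proposed flag.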
On Mon, May 14, 2018 at 10:37:38PM +0300, Boaz Harrosh wrote:
> On 14/05/18 22:15, Matthew Wilcox wrote:
> > On Mon, May 14, 2018 at 08:28:01PM +0300, Boaz Harrosh wrote:
> >> On a call to mmap an mmap provider (like an FS) can put
> >> this flag on vma->vm_flags.
> >>
> >> The VM_LOCAL_CPU flag
On Mon, 14 May 2018 20:28:01 +0300 Boaz Harrosh wrote:
> On a call to mmap an mmap provider (like an FS) can put
> this flag on vma->vm_flags.
>
> The VM_LOCAL_CPU flag tells the Kernel that the vma will be used
> from a single-core only, and therefore invalidation (flush_tlb) of
> PTE(s) need
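The patch under discussion has the filesystem's ->mmap() set the new flag on the vma it hands back. A hedged sketch of what that looks like; VM_LOCAL_CPU is the proposed (never merged) flag, and the zufs_* names are illustrative:

#include <linux/fs.h>
#include <linux/mm.h>

static const struct vm_operations_struct zufs_vm_ops = {
        /* .fault = ..., e.g. a handler like the zt_fault() sketched earlier */
};

static int zufs_file_mmap(struct file *file, struct vm_area_struct *vma)
{
        vma->vm_ops = &zufs_vm_ops;

        /* claim that this vma is only ever touched from one core, so PTE
         * invalidation may skip the cross-CPU TLB shootdown; VM_PFNMAP (or
         * VM_MIXEDMAP) is needed anyway for pfn-based inserts */
        vma->vm_flags |= VM_LOCAL_CPU | VM_PFNMAP;
        return 0;
}

The thread's objection is that nothing enforces the single-core claim, which is why per-CPU page tables come up elsewhere in the discussion as the way to actually make such a guarantee real.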
On 14/05/18 22:15, Matthew Wilcox wrote:
> On Mon, May 14, 2018 at 08:28:01PM +0300, Boaz Harrosh wrote:
>> On a call to mmap an mmap provider (like an FS) can put
>> this flag on vma->vm_flags.
>>
>> The VM_LOCAL_CPU flag tells the Kernel that the vma will be used
>> from a single-core only, and
On Mon, May 14, 2018 at 08:28:01PM +0300, Boaz Harrosh wrote:
> On a call to mmap an mmap provider (like an FS) can put
> this flag on vma->vm_flags.
>
> The VM_LOCAL_CPU flag tells the Kernel that the vma will be used
> from a single-core only, and therefore invalidation (flush_tlb) of
> PTE(s)
On 14/05/18 20:28, Boaz Harrosh wrote:
>
> On a call to mmap an mmap provider (like an FS) can put
> this flag on vma->vm_flags.
>
> The VM_LOCAL_CPU flag tells the Kernel that the vma will be used
> from a single-core only, and therefore invalidation (flush_tlb) of
> PTE(s) need not be a wide