On 04.10.21 at 15:27, Jason Gunthorpe wrote:
On Mon, Oct 04, 2021 at 03:22:22PM +0200, Christian König wrote:
> That use case is completely unrelated to GUP and when this doesn't work we
> have quite a problem.

My read is that unmap_mapping_range() guarantees the physical TLB
hardware is serialized across all CPUs upon return. It also
On 04.10.21 at 15:11, Jason Gunthorpe wrote:
On Mon, Oct 04, 2021 at 08:58:35AM +0200, Christian König wrote:
> I'm not following this discussion too closely, but try to look into it from
> time to time.
>
> On 01.10.21 at 19:45, Jason Gunthorpe wrote:
> > On Fri, Oct 01, 2021 at 11:01:49AM -0600, Logan Gunthorpe wrote:
> >
> > > In device-dax, the refcount is only used to prevent the device, and
> > > therefore the pages,
On 2021-10-01 4:46 p.m., Jason Gunthorpe wrote:
On Fri, Oct 01, 2021 at 04:22:28PM -0600, Logan Gunthorpe wrote:
> > It would close this issue, however synchronize_rcu() is very slow
> > (think > 1 second) in some cases and thus cannot be inserted here.
>
> It shouldn't be *that* slow, at least not the vast majority of the
> time... it seems a
On 2021-10-01 4:14 p.m., Jason Gunthorpe wrote:
On Fri, Oct 01, 2021 at 02:13:14PM -0600, Logan Gunthorpe wrote:
> On 2021-10-01 11:45 a.m., Jason Gunthorpe wrote:
> >> Before the invalidation, an active flag is cleared to ensure no new
> >> mappings can be created while the unmap is proceeding.
> >> unmap_mapping_range() should sequence itself with the TLB flush and
> >
> > AFAIK unmap_mapping_range() kicks off the
On Fri, Oct 01, 2021 at 11:01:49AM -0600, Logan Gunthorpe wrote:
> In device-dax, the refcount is only used to prevent the device, and
> therefore the pages, from going away on device unbind. Pages cannot be
> recycled, as you say, as they are mapped linearly within the device. The
> address
On Wed, Sep 29, 2021 at 09:36:52PM -0300, Jason Gunthorpe wrote:
> Why would DAX want to do this in the first place?? This means the
> address space zap is much more important than just speeding up
> destruction, it is essential for correctness since the PTEs are not
> holding refcounts
On Wed, Sep 29, 2021 at 05:49:36PM -0600, Logan Gunthorpe wrote:
> Some of this seems out of date. Pretty sure the pages are not refcounted
> with vmf_insert_mixed() and vmf_insert_mixed() is currently the only way
> to use VM_MIXEDMAP mappings.
Hum.
vmf_insert_mixed() boils down to
On Wed, Sep 29, 2021 at 05:27:22PM -0600, Logan Gunthorpe wrote:
> > finish_fault() should set the pte_devmap - eg by passing the
> > PFN_DEV|PFN_MAP somehow through the vma->vm_page_prot to mk_pte() or
> > otherwise signaling do_set_pte() that it should set those PTE bits
> > when it creates the
On Wed, Sep 29, 2021 at 03:42:00PM -0600, Logan Gunthorpe wrote:
> The main reason is probably this: if we don't use VM_MIXEDMAP, then we
> can't set pte_devmap().

I think that is an API limitation in the fault routines..

finish_fault() should set the pte_devmap - eg by passing the
On Thu, Sep 16, 2021 at 05:40:59PM -0600, Logan Gunthorpe wrote:
> +static void pci_p2pdma_unmap_mappings(void *data)
> +{
> +	struct pci_dev *pdev = data;
> +	struct pci_p2pdma *p2pdma = rcu_dereference_protected(pdev->p2pdma, 1);
> +
> +	p2pdma->active = false;
> +
On Thu, Sep 16, 2021 at 05:40:59PM -0600, Logan Gunthorpe wrote:
> +int pci_mmap_p2pmem(struct pci_dev *pdev, struct vm_area_struct *vma)
> +{
> + struct pci_p2pdma_map *pmap;
> + struct pci_p2pdma *p2pdma;
> + int ret;
> +
> + /* prevent private mappings from being established */
On Thu, Sep 16, 2021 at 05:40:59PM -0600, Logan Gunthorpe wrote:
> Introduce pci_mmap_p2pmem() which is a helper to allocate and mmap
> a hunk of p2pmem into userspace.
>
> Pages are allocated from the genalloc in bulk and their reference count
> incremented. They are returned to the genalloc