Hi Dave
I hope you are well. Logan and I were looking at adding DMA support to PMEM and
then were informed you have proposed some patches to do just that for the ioat
DMA engine. The latest version of those I can see is the v7 from August 2017.
Is there a more recent version? What happened to
>Here's where I left it last
>
> https://git.kernel.org/pub/scm/linux/kernel/git/djiang/linux.git/log/?h=pmem_blk_dma
Thanks Dave. I'll certainly rebase these on 4.18.x and do some testing!
> I do think we need to do some rework with the dmaengine in order to get
> better efficiency
> Any plans adding the capability to nvme-rdma? Should be
> straight-forward... In theory, the use-case would be rdma backend
> fabric behind. Shouldn't be hard to test either...
Nice idea Sagi. Yes, we have started looking at that. Though again we
would probably want to impose the "attached
> > Ideally, we'd want to use an NVME CMB buffer as p2p memory. This would
> > save an extra PCI transfer as the NVME card could just take the data
> > out of it's own memory. However, at this time, cards with CMB buffers
> > don't seem to be available.
> Can you describe what would be the plan to
>> So Oliver (CC) was having issues getting any of that to work for us.
>>
>> The problem is that according to him (I didn't double check the latest
>> patches) you effectively hotplug the PCIe memory into the system when
>> creating struct pages.
>>
>> This cannot possibly work for us. First we
Thanks for the detailed review Bjorn!
>>
>> + Enabling this option will also disable ACS on all ports behind
>> + any PCIe switch. This effectively puts all devices behind any
>> + switch into the same IOMMU group.
>
> Does this really mean "all devices behind the same Root Port
> I agree, I don't think this series should target anything other than
> using p2p memory located in one of the devices expected to participate
> in the p2p transaction for a first pass..
I disagree. There is definitely interest in using an NVMe CMB as a bounce buffer
and in deploying systems
> your kernel provider needs to decide whether they favor device assignment or
> p2p
Thanks Alex! The hardware requirements for P2P (switch, high performance EPs)
are such that we really only expect CONFIG_P2P_DMA to be enabled in specific
instances and in those instances the users have made a
> The intention of HMM is to be useful for all device memory that wish
> to have struct page for various reasons.
Hi Jerome and thanks for your input! Understood. We have looked at HMM in the
past and long term I definitely would like to consider how we can add P2P
functionality to HMM for both
>> We'd prefer to have a generic way to get p2pmem instead of restricting
>> ourselves to only using CMBs. We did work in the past where the P2P memory
>> was part of an IB adapter and not the NVMe card. So this won't work if it's
>> an NVMe only interface.
> It just seems like it makin
> I'm pretty sure the spec disallows routing-to-self so doing a P2P
> transaction in that sense isn't going to work unless the device
> specifically supports it and intercepts the traffic before it gets to
> the port.
This is correct. Unless the device intercepts the TLP before it hits the
roo
> No, locality matters. If you have a bunch of NICs and bunch of drives
> and the allocator chooses to put all P2P memory on a single drive your
> performance will suck horribly even if all the traffic is offloaded.
Sagi brought this up earlier in his comments about the _find_ function.
> There's a meaningful difference between writing to an NVMe CMB vs PMR
When the PMR spec becomes public we can discuss how best to integrate it into
the P2P framework (if at all) ;-).
Stephen
> We don't want to lump these all together without knowing which region you're
> allocating from, right?
In all seriousness I do agree with you on these points, Keith, in the long term. We
would consider adding property flags for the memory as it is added to the p2p
core and then the allocator could evo
>http://nvmexpress.org/wp-content/uploads/NVM-Express-1.3-Ratified-TPs.zip
@Keith - my apologies.
@Christoph - thanks for the link
So my understanding of when the technical content surrounding new NVMe
Technical Proposals (TPs) could be discussed was wrong. I thought the TP content could only be
discussed onc
> It seems people misunderstand HMM :(
Hi Jerome
Your unhappy face emoticon made me sad so I went off to (re)read up on HMM.
Along the way I came up with a couple of things.
While hmm.txt is really nice to read, it makes no mention of DEVICE_PRIVATE and
DEVICE_PUBLIC. It also gives no indica
>Yes i need to document that some more in hmm.txt...
Hi Jerome, thanks for the explanation. Can I suggest you update hmm.txt with
what you sent out?
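In the meantime, for anyone else following along, here is my (quite possibly
imperfect) paraphrase of the two types as they appear in
include/linux/memremap.h; please correct me if I have this wrong:

  /* My paraphrase, not the actual kernel comments: */
  enum memory_type {
          MEMORY_DEVICE_PRIVATE,  /* device memory the CPU cannot address
                                   * directly; pages are migrated to and from
                                   * system memory as needed */
          MEMORY_DEVICE_PUBLIC,   /* coherent device memory the CPU can
                                   * address much like ordinary RAM */
          /* ... */
  };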
> I am about to send RFC for nouveau, i am still working out some bugs.
Great. I will keep an eye out for it. An example user of HMM will be
>> It sounds like you have very tight hardware expectations for this to work
>> at this moment. You also don't want to generalize this code for others and
>> address the shortcomings.
> No, that's the way the community has pushed this work
Hi Sinan
Thanks for all the input. As Logan has pointed
Hi Sinan
>If hardware doesn't support it, blacklisting should have been the right
>path and I still think that you should remove all switch business from the
> code.
>I did not hear enough justification for having a switch requirement
>for P2P.
We disagree. As does the communit
>I assume you want to exclude Root Ports because of multi-function
> devices and the "route to self" error. I was hoping for a reference
> to that so I could learn more about it.
Apologies Bjorn. This slipped through my net. I will try and get you a
reference for RTS in the next couple of days
> P2P over PCI/PCI-X is quite common in devices like raid controllers.
Hi Dan
Do you mean between PCIe devices below the RAID controller? Isn't it pretty
novel to be able to support PCIe EPs below a RAID controller (as opposed to
SCSI based devices)?
> It would be useful if those configuratio
> I've seen the response that peers directly below a Root Port could not
> DMA to each other through the Root Port because of the "route to self"
> issue, and I'm not disputing that.
Bjorn
You asked me for a reference to RTS in the PCIe specification. As luck would
have it I ended up in an I
> That would be very nice but many devices do not support the internal
> route.
But Logan in the NVMe case we are discussing movement within a single function
(i.e. from an NVMe namespace to an NVMe CMB on the same function). Bjorn is
discussing movement between two functions (PFs or VFs) in the
> I'll see if I can get our PCI SIG people to follow this through
Hi Jonathan
Can you let me know if this moves forward within PCI-SIG? I would like to track
it. I can see this being doable between Root Ports that reside in the same Root
Complex but might become more challenging to standard
Hi Christian
> AMD APUs mandatory need the ACS flag set for the GPU integrated in the
> CPU when IOMMU is enabled or otherwise you will break SVM.
OK but in this case aren't you losing (many of) the benefits of P2P since all
DMAs will now get routed up to the IOMMU before being passed down
Hi Dan
>It seems unwieldy that this is a compile time option and not a runtime
>option. Can't we have a kernel command line option to opt-in to this
>behavior rather than require a wholly separate kernel image?
I think because of the security implications associated with p2pdma and
Hi Don
>Well, p2p DMA is a function of a cooperating 'agent' somewhere above the two
>devices.
>That agent should 'request' to the kernel that ACS be removed/circumvented
> (p2p enabled) btwn two endpoints.
>I recommend doing so via a sysfs method.
Yes we looked at something like this i
Hi Jerome
>I think there is confusion here, Alex properly explained the scheme
> PCIE-device do a ATS request to the IOMMU which returns a valid
>translation for a virtual address. Device can then use that address
>directly without going through IOMMU for translation.
This makes sen
Hi Alex
>But it would be a much easier proposal to disable ACS when the IOMMU is
>not enabled, ACS has no real purpose in that case.
I guess one issue I have with this is that it disables IOMMU groups for all
Root Ports and not just the one(s) we wish to do p2pdma on.
>The IOMM
>Yeah, so based on the discussion I'm leaning toward just having a
>command line option that takes a list of BDFs and disables ACS for them.
>(Essentially as Dan has suggested.) This avoids the shotgun.
I concur that this seems to be where the conversation is taking us.
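Just so we are all picturing the same thing, I assume the option would look
something like this (name and syntax purely illustrative, this is not an
existing parameter):

  pci=disable_acs=0000:02:00.0;0000:03:00.0

i.e. a semicolon-separated list of BDFs for the ports we want to allow p2p
between, with ACS left untouched everywhere else.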
@Alex - Before
Hi Alex and Don
>Correct, the VM has no concept of the host's IOMMU groups, only the
> hypervisor knows about the groups,
But as I understand it these groups are usually passed through to VMs on a
per-group basis by the hypervisor? So IOMMU group 1 might be passed to VM A and
IOMMU group
Hi Logan
>Yeah, I'm having a hard time coming up with an easy enough solution for
>the user. I agree with Dan though, the bus renumbering risk would be
>fairly low in the custom hardware seeing the switches are likely going
>to be directly soldered to the same board with the CPU.
Hi Don
>RDMA VFs lend themselves to NVMEoF w/device-assignment need a way to
>put NVME 'resources' into an assignable/manageable object for
> 'IOMMU-grouping',
>which is really a 'DMA security domain' and less an 'IOMMU grouping
> domain'.
Ha, I like your term "DMA Security
Jerome and Christian
> I think there is confusion here, Alex properly explained the scheme
> PCIE-device do a ATS request to the IOMMU which returns a valid
> translation for a virtual address. Device can then use that address
> directly without going through IOMMU for translation.
So I went
Christian
>Interesting point, give me a moment to check that. That finally makes
>all the hardware I have standing around here valuable :)
Yes. At the very least it provides an initial standards-based path for P2P DMAs
across RPs which is something we have discussed on this list in
Hi Jerome
> Now inside that page table you can point GPU virtual address
> to use GPU memory or use system memory. Those system memory entry can
> also be mark as ATS against a given PASID.
Thanks. This all makes sense.
But do you have examples of this in a kernel driver (if so can you poin
Hi Christian
> Why would a switch not identify that as a peer address? We use the PASID
>together with ATS to identify the address space which a transaction
>should use.
I think you are conflating two types of TLPs here. If the device supports ATS
then it will issue a Translation Request (TR) TLP to obtain
Hi Jerome
> As it is tie to PASID this is done using IOMMU so looks for caller
> of amd_iommu_bind_pasid() or intel_svm_bind_mm() in GPU the existing
> user is the AMD GPU driver see:
Ah thanks. This cleared things up for me. A quick search shows there are still
no users of intel_svm_bind_m
> Not to me. In the p2pdma code we specifically program DMA engines with
> the PCI bus address.
Ah yes of course. Brain fart on my part. We are not programming the P2PDMA
initiator with an IOVA but with the PCI bus address...
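For my own notes, here is a minimal sketch of that flow as I understand it,
using helper names from memory of the p2pdma patches (so the exact names and
signatures may well differ):

  #include <linux/pci.h>
  #include <linux/pci-p2pdma.h>

  static int p2p_example(struct pci_dev *provider, size_t len)
  {
          void *buf;
          pci_bus_addr_t bus_addr;

          /* Allocate from the provider's exposed p2p memory (e.g. a CMB). */
          buf = pci_alloc_p2pmem(provider, len);
          if (!buf)
                  return -ENOMEM;

          /*
           * This is a PCI bus address, not an IOVA: it is what gets
           * programmed into the initiator's DMA engine and it never goes
           * through the IOMMU mapping path.
           */
          bus_addr = pci_p2pmem_virt_to_bus(provider, buf);
          pr_info("p2p bus address: %llx\n", (unsigned long long)bus_addr);

          /* ... hand bus_addr to the peer device's DMA descriptor ... */

          pci_free_p2pmem(provider, buf, len);
          return 0;
  }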
> So regardless of whether we are using the IOMMU or
> not, the packe
Hi Jerome
>Note on GPU we would not rely on ATS for peer to peer. Some part
>of the GPU (DMA engines) do not necessarily support ATS. Yet those
>are the part likely to be use in peer to peer.
OK this is good to know. I agree the DMA engine is probably one of the GPU
components mos
Hi Jerome
>Hopes this helps understanding the big picture. I over simplify thing and
>devils is in the details.
This was a great primer, thanks for putting it together. An LWN.net article
perhaps ;-)??
Stephen
>I find this hard to believe. There's always the possibility that some
>part of the system doesn't support ACS so if the PCI bus addresses and
>IOVA overlap there's a good chance that P2P and ATS won't work at all on
>some hardware.
I tend to agree but this comes down to how IOV
All
> Alex (or anyone else) can you point to where IOVA addresses are generated?
A case of RTFM perhaps (though a pointer to the code would still be
appreciated).
https://www.kernel.org/doc/Documentation/Intel-IOMMU.txt
Some exceptions to IOVA
---
Interrupt ranges are not a
ZONE_DEVICE IO memory with struct pages.
Stephen Bates (2):
iopmem : Add a block device driver for PCIe attached IO memory.
iopmem : Add documentation for iopmem driver
Documentation/blockdev/00-INDEX | 2 +
Documentation/blockdev/iopmem.txt | 62 +++
MAINTAINERS
Add documentation for the iopmem PCIe device driver.
Signed-off-by: Stephen Bates
Signed-off-by: Logan Gunthorpe
---
Documentation/blockdev/00-INDEX | 2 ++
Documentation/blockdev/iopmem.txt | 62 +++
2 files changed, 64 insertions(+)
create mode 100644
Add a new block device driver that binds to PCIe devices and turns
PCIe BARs into DAX capable block devices.
Signed-off-by: Stephen Bates
Signed-off-by: Logan Gunthorpe
---
MAINTAINERS | 7 ++
drivers/block/Kconfig | 27
drivers/block/Makefile | 1 +
drivers/block
[1] https://lists.01.org/pipermail/linux-nvdimm/2015-August/001810.html
[2] https://lists.01.org/pipermail/linux-nvdimm/2015-October/002387.html
Signed-off-by: Stephen Bates
Signed-off-by: Logan Gunthorpe
---
drivers/dax/pmem.c | 4 +-
drivers/nvdimm/pmem.c | 4 +-
include/linux/
On Wed, Oct 19, 2016 at 10:50:25AM -0700, Dan Williams wrote:
> On Tue, Oct 18, 2016 at 2:42 PM, Stephen Bates wrote:
> > From: Logan Gunthorpe
> >
> > We build on recent work that adds memory regions owned by a device
> > driver (ZONE_DEVICE) [1] and to add struct
On Tue, Oct 18, 2016 at 08:51:15PM -0700, Dan Williams wrote:
> [ adding Ashok and David for potential iommu comments ]
>
Hi Dan
Thanks for adding Ashok and David!
>
> I agree with the motivation and the need for a solution, but I have
> some questions about this implementation.
>
> >
> > Consum
> >>
> >> If you're only using the block-device as a entry-point to create
> >> dax-mappings then a device-dax (drivers/dax/) character-device might
> >> be a better fit.
> >>
> >
> > We chose a block device because we felt it was intuitive for users to
> > carve up a memory region but putting a DA
Hi Dave and Christoph
On Fri, Oct 21, 2016 at 10:12:53PM +1100, Dave Chinner wrote:
> On Fri, Oct 21, 2016 at 02:57:14AM -0700, Christoph Hellwig wrote:
> > On Fri, Oct 21, 2016 at 10:22:39AM +1100, Dave Chinner wrote:
> > > You do realise that local filesystems can silently change the
> > > locat
On Wed, Oct 19, 2016 at 01:01:06PM -0700, Dan Williams wrote:
> >>
> >> In the cover letter, "[PATCH 0/3] iopmem : A block device for PCIe
> >> memory", it mentions that the lack of I/O coherency is a known issue
> >> and users of this functionality need to be cognizant of the pitfalls.
> >> If th
On Tue, October 25, 2016 3:19 pm, Dave Chinner wrote:
> On Tue, Oct 25, 2016 at 05:50:43AM -0600, Stephen Bates wrote:
>>
>> Dave are you saying that even for local mappings of files on a DAX
>> capable system it is possible for the mappings to move on you unless the
>>
>> The NVMe fabrics stuff could probably make use of this. It's an
>> in-kernel system to allow remote access to an NVMe device over RDMA. So
>> they ought to be able to optimize their transfers by DMAing directly to
>> the NVMe's CMB -- no userspace interface would be required but there
>> wou
Hi All
This has been a great thread (thanks to Alex for kicking it off) and I
wanted to jump in and maybe try to put some summary around the
discussion. I also wanted to propose we include this as a topic for LSF/MM
because I think we need more discussion on the best way to add this
functionality
>>> I've already recommended that iopmem not be a block device and
>>> instead be a device-dax instance. I also don't think it should claim
>>> the PCI ID, rather the driver that wants to map one of its bars this
>>> way can register the memory region with the device-dax core.
>>>
>>> I'm not sure
On Fri, January 6, 2017 4:10 pm, Logan Gunthorpe wrote:
>
>
> On 06/01/17 11:26 AM, Jason Gunthorpe wrote:
>
>
>> Make a generic API for all of this and you'd have my vote..
>>
>>
>> IMHO, you must support basic pinning semantics - that is necessary to
>> support generic short lived DMA (eg filesys
On 2017-04-06, 6:33 AM, "Sagi Grimberg" wrote:
> Say it's connected via 2 legs, the bar is accessed from leg A and the
> data from the disk comes via leg B. In this case, the data is heading
> towards the p2p device via leg B (might be congested), the completion
> goes directly to the RC, and the
> Yes, this makes sense I think we really just want to distinguish host
> memory or not in terms of the dev_pagemap type.
I would like to see mutually exclusive flags for host memory (or not) and
persistence (or not).
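Something along these lines is what I am picturing (purely illustrative names,
not a proposal for the actual dev_pagemap API):

  /* Hypothetical flags, names made up for the sake of discussion. */
  #define PGMAP_HOST_MEM     (1 << 0)  /* backed by host DRAM */
  #define PGMAP_DEVICE_MEM   (1 << 1)  /* backed by device (e.g. PCI BAR) memory */
  #define PGMAP_VOLATILE     (1 << 2)  /* contents lost on power loss */
  #define PGMAP_PERSISTENT   (1 << 3)  /* contents survive power loss */

so an NVMe CMB would be (PGMAP_DEVICE_MEM | PGMAP_VOLATILE), a PMR would be
(PGMAP_DEVICE_MEM | PGMAP_PERSISTENT) and conventional PMEM would be
(PGMAP_HOST_MEM | PGMAP_PERSISTENT).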
Stephen
>> Yes, this makes sense I think we really just want to distinguish host
>> memory or not in terms of the dev_pagemap type.
>
>> I would like to see mutually exclusive flags for host memory (or not) and
>> persistence (or not).
>>
>
> Why persistence? It has zero meaning to the mm.
I like the ide
> My first reflex when reading this thread was to think that this whole domain
> lends itself excellently to testing via Qemu. Could it be that doing this in
> the opposite direction might be a safer approach in the long run even though
> (significant) more work up-front?
While the idea of QEM
>> Yes, that's why I used 'significant'. One good thing is that given resources
>> it can easily be done in parallel with other development, and will give
>> additional
>> insight of some form.
>
>Yup, well if someone wants to start working on an emulated RDMA device
>that actually simulates prop