On Sun, Mar 11, 2018 at 12:16:25PM -0700, Dan Williams wrote:
> I did the rename, and am housing these in fs/dax.c, I assume that's
> what you wanted.
libfs.c would seem ok too, but we're into micro-management land now :)
Linux-nvdimm mailing list
On Mon, Mar 12, 2018 at 7:17 AM, Jerome Glisse wrote:
> On Fri, Mar 09, 2018 at 10:55:26PM -0800, Dan Williams wrote:
>> The HMM sub-system extended dev_pagemap to arrange a callback when a
>> dev_pagemap managed page is freed. Since a dev_pagemap page is free /
>> idle when
This just covers the topology function of the EDAC driver.
We locate which DIMM slots are populated with NVDIMMs and
query the NFIT and SMBIOS tables to get the size.
Signed-off-by: Tony Luck
---
drivers/edac/Kconfig    |  5 +++-
drivers/edac/skx_edac.c | 66
There are now non-volatile versions of DIMMs. Add a new entry to
"enum mem_type" and a new string in edac_mem_types[].
Signed-off-by: Tony Luck
---
drivers/edac/edac_mc.c | 3 ++-
include/linux/edac.h   | 3 +++
2 files changed, 5 insertions(+), 1 deletion(-)
diff --git
On Sun, Mar 11, 2018 at 10:15 AM, Dan Williams wrote:
> On Sun, Mar 11, 2018 at 4:27 AM, Peter Zijlstra wrote:
>> On Fri, Mar 09, 2018 at 10:55:32PM -0800, Dan Williams wrote:
>>> Add a generic facility for awaiting an atomic_t to reach a value of
For P2P requests, we must use the pci_p2pmem_[un]map_sg() functions
instead of the dma_map_sg functions.
With that, we can then indicate PCI_P2P support in the request queue.
For this, we create an NVME_F_PCI_P2P flag which tells the core to
set QUEUE_FLAG_PCI_P2P in the request queue.
For peer-to-peer transactions to work the downstream ports in each
switch must not have the ACS flags set. At this time there is no way
to dynamically change the flags and update the corresponding IOMMU
groups so this is done at enumeration time before the groups are
assigned.
This effectively
Introduce a quirk to use CMB-like memory on older devices that have
an exposed BAR but do not advertise support for using CMBLOC and
CMBSIZE.
We'd like to use some of these older cards to test P2P memory.
Signed-off-by: Logan Gunthorpe
Reviewed-by: Sagi Grimberg
Register the CMB buffer as p2pmem and use the appropriate allocation
functions to create and destroy the IO SQ.
If the CMB supports WDS and RDS, publish it for use as P2P memory
by other devices.
Signed-off-by: Logan Gunthorpe
---
drivers/nvme/host/pci.c | 75
In order to use PCI P2P memory pci_p2pmem_[un]map_sg() functions must be
called to map the correct PCI bus address.
To do this, check the first page in the scatter list to see if it is P2P
memory or not. At the moment, scatter lists that contain P2P memory must
be homogeneous so if the first page
QUEUE_FLAG_PCI_P2P is introduced, meaning a driver's request queue
supports targeting P2P memory.
REQ_PCI_P2P is introduced to indicate a particular bio request is
directed to/from PCI P2P memory. A request with this flag is not
accepted unless the corresponding queues have the QUEUE_FLAG_PCI_P2P
Hi Everyone,
Here's v3 of our series to introduce P2P based copy offload to NVMe
fabrics. This version has been rebased onto v4.16-rc5.
Thanks,
Logan
Changes in v3:
* Many more fixes and minor cleanups that were spotted by Bjorn
* Additional explanation of the ACS change in both the commit
Some PCI devices may have memory mapped in a BAR space that's
intended for use in peer-to-peer transactions. In order to enable
such transactions the memory must be registered with ZONE_DEVICE pages
so it can be used by DMA interfaces in existing drivers.
Add an interface for other subsystems to
On Mon, 12 Mar 2018 13:35:19 -0600
Logan Gunthorpe wrote:
> Add a restructured text file describing how to write drivers
> with support for P2P DMA transactions. The document describes
> how to use the APIs that were added in the previous few
> commits.
>
> Also adds an
On Sat, Mar 10, 2018 at 02:22:17PM +0100, Jean Delvare wrote:
> Note that it is possible to store MB values (up to 16 MB) using kB as
> the unit. The specification allows for it, and a few systems use that
> option. For example [1], the DMI data of a Supermicro X8STi board looks
> like:
On 3/12/2018 3:35 PM, Logan Gunthorpe wrote:
> +int pci_p2pdma_add_client(struct list_head *head, struct device *dev)
It feels like the code tried to be a generic p2pdma provider first, then got
converted to PCI, yet all the dev parameters are still struct device.
Maybe the dev parameter should also be
On 3/12/2018 3:35 PM, Logan Gunthorpe wrote:
> - if (nvmeq->sq_cmds_io)
I think you should keep the code as it is for the case where
(!nvmeq->sq_cmds_is_io && nvmeq->sq_cmds_io)
You are changing the behavior for NVMe drives with CMB buffers.
You can change the if statement here with the
On 3/12/2018 9:55 PM, Sinan Kaya wrote:
> On 3/12/2018 3:35 PM, Logan Gunthorpe wrote:
>> - if (nvmeq->sq_cmds_io)
>
> I think you should keep the code as it is for the case where
> (!nvmeq->sq_cmds_is_io && nvmeq->sq_cmds_io)
Never mind. I misunderstood the code.
>
> You are changing the
On 3/12/2018 1:41 PM, Jonathan Corbet wrote:
> This all seems good, but...could we consider moving this documentation to
> driver-api/PCI as it's converted to RST? That would keep it together with
> similar materials and bring a bit more coherence to Documentation/ as a
> whole.
Yup, I'll change this