On 05/07/2018 07:33 PM, Huaisheng HS1 Ye wrote:
> diff --git a/mm/Kconfig b/mm/Kconfig
> index c782e8f..5fe1f63 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -687,6 +687,22 @@ config ZONE_DEVICE
>
> +config ZONE_NVM
> + bool "Manage NVDIMM (pmem) by memory management (EXPERIMENTAL)"
> +
On Mon, May 7, 2018 at 7:59 PM, Huaisheng HS1 Ye wrote:
>> Dan Williams writes:
>>> On Mon, May 7, 2018 at 11:46 AM, Matthew Wilcox wrote:
On Mon, May 07, 2018 at 10:50:21PM +0800, Huaisheng Ye wrote:
>
On Tue, May 08, 2018 at 02:59:40AM +, Huaisheng HS1 Ye wrote:
> Currently, in our mind, an ideal use scenario is that we put all page
> caches into zone_nvm. Without any doubt, page cache is an efficient and
> common cache implementation, but it has the disadvantage that all dirty
> data within it
>
> Dan Williams writes:
>> On Mon, May 7, 2018 at 11:46 AM, Matthew Wilcox wrote:
>>> On Mon, May 07, 2018 at 10:50:21PM +0800, Huaisheng Ye wrote:
Traditionally, NVDIMMs are treated by mm (memory management) subsystem as
DEVICE zone,
Create PTE, PMD, PUD and P4D levels of page table mapping for the
physical addresses of both DRAM and NVDIMM. Here E820_TYPE_PMEM marks
the NVDIMM regions in the e820_table.
Signed-off-by: Huaisheng Ye
Signed-off-by: Ocean He
---
arch/x86/mm/init_64.c | 16
DRAM and NVDIMM are divided into separate zones, so the NVM
zone is dedicated to NVDIMMs.
In zone_spanned_pages_in_node(), the spanned pages of the zones
are calculated separately for DRAM and NVDIMM using the flags
MEMBLOCK_NONE and MEMBLOCK_NVDIMM.
Signed-off-by: Huaisheng Ye
Add ZONE_NVM to enum zone_type, and create GFP_NVM, which
represents the gfp_t flag for the NVM zone.
Because there is no unused lower plain-integer GFP bitmask that can
be used for ___GFP_NVM, a workable way is to take space from
GFP_ZONE_BAD to fit ZONE_NVM into GFP_ZONE_TABLE.
Signed-off-by: Huaisheng Ye
This expands the interface of get_pfn_range_for_nid() with
memblock flags, so mm can get a pfn range that carries particular flags.
Signed-off-by: Huaisheng Ye
Signed-off-by: Ocean He
---
include/linux/mm.h | 4
mm/page_alloc.c    | 17 -
This patch gives mm the capability to get special regions
from memblock.
During the boot process, memblock marks NVDIMM regions with the
flag MEMBLOCK_NVDIMM, and also expands the interface of functions
and macros with flags.
Signed-off-by: Huaisheng Ye
Signed-off-by: Ocean He
>
> On Mon, May 07, 2018 at 10:50:21PM +0800, Huaisheng Ye wrote:
> > Traditionally, NVDIMMs are treated by mm (memory management)
> > subsystem as DEVICE zone, which is a virtual zone and both its start
> > and end of pfn
> > are equal to 0, mm wouldn’t manage NVDIMM directly as DRAM, kernel
>
On Thu, May 03, 2018 at 04:53:18PM -0700, Dan Williams wrote:
> On Tue, Apr 24, 2018 at 4:33 PM, Dan Williams wrote:
> > Changes since v8 [1]:
> > * Rebase on v4.17-rc2
> >
> > * Fix get_user_pages_fast() for ZONE_DEVICE pages to revalidate the pte,
> > pmd, pud
> How do you envision merging this? There's a big chunk in drivers/pci, but
> really no opportunity for conflicts there, and there's significant stuff in
> block and nvme that I don't really want to merge.
>
> If Alex is OK with the ACS situation, I can ack the PCI parts and you could
> merge it
On Mon, Apr 23, 2018 at 05:30:32PM -0600, Logan Gunthorpe wrote:
> Hi Everyone,
>
> Here's v4 of our series to introduce P2P based copy offload to NVMe
> fabrics. This version has been rebased onto v4.17-rc2. A git repo
> is here:
>
> https://github.com/sbates130272/linux-p2pmem pci-p2p-v4
> ...
On Mon, Apr 23, 2018 at 05:30:38PM -0600, Logan Gunthorpe wrote:
> Add a restructured text file describing how to write drivers
> with support for P2P DMA transactions. The document describes
> how to use the APIs that were added in the previous few
> commits.
>
> Also adds an index for the PCI
[+to Alex]
Alex,
Are you happy with this strategy of turning off ACS based on
CONFIG_PCI_P2PDMA? We only check this at enumeration-time and
I don't know if there are other places we would care?
On Mon, Apr 23, 2018 at 05:30:36PM -0600, Logan Gunthorpe wrote:
> For peer-to-peer transactions to
Thanks for the review. I'll apply all of these changes for the next
version of the set.
>> +/*
>> + * If a device is behind a switch, we try to find the upstream bridge
>> + * port of the switch. This requires two calls to pci_upstream_bridge():
>> + * one for the upstream port on the switch,
s/dma/DMA/ (in subject)
On Mon, Apr 23, 2018 at 05:30:35PM -0600, Logan Gunthorpe wrote:
> The DMA address used when mapping PCI P2P memory must be the PCI bus
> address. Thus, introduce pci_p2pmem_[un]map_sg() to map the correct
> addresses when using P2P memory.
>
> For this, we assume that an
On Mon, Apr 23, 2018 at 05:30:33PM -0600, Logan Gunthorpe wrote:
> Some PCI devices may have memory mapped in a BAR space that's
> intended for use in peer-to-peer transactions. In order to enable
> such transactions the memory must be registered with ZONE_DEVICE pages
> so it can be used by DMA
On Mon, May 7, 2018 at 12:18 PM, Matthew Wilcox wrote:
> On Mon, May 07, 2018 at 11:57:10AM -0700, Dan Williams wrote:
>> I think adding yet one more mm-zone is the wrong direction. Instead,
>> what we have been considering is a mechanism to allow a device-dax
>> instance to
On Mon, May 7, 2018 at 12:28 PM, Jeff Moyer wrote:
> Dan Williams writes:
[..]
>>> What's the use case?
>>
>> Use NVDIMMs as System-RAM given their potentially higher capacity than
>> DDR. The expectation in that case is that data is forfeit (not
>>
Dan Williams writes:
> On Mon, May 7, 2018 at 12:08 PM, Jeff Moyer wrote:
>> Dan Williams writes:
>>
>>> On Mon, May 7, 2018 at 11:46 AM, Matthew Wilcox wrote:
On Mon, May 07, 2018 at 10:50:21PM
On Mon, May 07, 2018 at 11:57:10AM -0700, Dan Williams wrote:
> I think adding yet one more mm-zone is the wrong direction. Instead,
> what we have been considering is a mechanism to allow a device-dax
> instance to be given back to the kernel as a distinct numa node
> managed by the VM. It seems
Dan Williams writes:
> On Mon, May 7, 2018 at 11:46 AM, Matthew Wilcox wrote:
>> On Mon, May 07, 2018 at 10:50:21PM +0800, Huaisheng Ye wrote:
>>> Traditionally, NVDIMMs are treated by mm(memory management) subsystem as
>>> DEVICE zone, which is a
On Mon, May 07, 2018 at 10:50:21PM +0800, Huaisheng Ye wrote:
> Traditionally, NVDIMMs are treated by mm(memory management) subsystem as
> DEVICE zone, which is a virtual zone and both its start and end of pfn
> are equal to 0, mm wouldn’t manage NVDIMM directly as DRAM, kernel uses
>
___
Linux-nvdimm mailing list
Linux-nvdimm@lists.01.org
https://lists.01.org/mailman/listinfo/linux-nvdimm
/commits/Baoquan-He/resource-Use-list_head-to-link-sibling-resource/20180507-144345
config: arm-allmodconfig (attached as .config)
compiler: arm-linux-gnueabi-gcc (Debian 7.2.0-11) 7.2.0
reproduce:
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
/commits/Baoquan-He/resource-Use-list_head-to-link-sibling-resource/20180507-144345
config: powerpc-defconfig (attached as .config)
compiler: powerpc64-linux-gnu-gcc (Debian 7.2.0-11) 7.2.0
reproduce:
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin
struct resource uses a singly linked list to link siblings, implemented
by pointer operations. Replace it with list_head for better code
readability.
Based on this list_head replacement, it will be very easy to do reverse
iteration on iomem_resource's sibling list in a later patch.
Besides, type
This function, being a variant of walk_system_ram_res() introduced in
commit 8c86e70acead ("resource: provide new functions to walk through
resources"), walks through the list of all System RAM resources in
reverse order, i.e., from higher to lower.
It will be used in the kexec_file code.
For kexec_file loading, if kexec_buf.top_down is 'true', the memory used
to load the kernel/initrd/purgatory is supposed to be allocated from top
to bottom. This is what we have been doing all along in the old kexec
loading interface, and that loading is still the default setting in some
This patchset does the following:
1) Replace struct resource's sibling list, changing it from a singly
linked list to a list_head, and clear out the pointer operations on the
singly linked list for better code readability.
2) Based on the list_head replacement, add a new function
walk_system_ram_res_rev() which can