Re: [Xen-devel] [RFC XEN PATCH v3 12/39] tools/xen-ndctl: add NVDIMM management util 'xen-ndctl'

2017-09-13 Thread Dan Williams
On Mon, Sep 11, 2017 at 2:24 PM, Konrad Rzeszutek Wilk
<konrad.w...@oracle.com> wrote:
> On Mon, Sep 11, 2017 at 09:35:08AM -0700, Dan Williams wrote:
>> On Sun, Sep 10, 2017 at 10:39 PM, Haozhong Zhang
>> <haozhong.zh...@intel.com> wrote:
>> > On 09/10/17 22:10 -0700, Dan Williams wrote:
>> >> On Sun, Sep 10, 2017 at 9:37 PM, Haozhong Zhang
>> >> <haozhong.zh...@intel.com> wrote:
>> >> > The kernel NVDIMM driver and the traditional NVDIMM management
>> >> > utilities in Dom0 do not work now. 'xen-ndctl' is added as an
>> >> > alternative, which manages NVDIMM via Xen hypercalls.
>> >> >
>> >> > Signed-off-by: Haozhong Zhang <haozhong.zh...@intel.com>
>> >> > ---
>> >> > Cc: Ian Jackson <ian.jack...@eu.citrix.com>
>> >> > Cc: Wei Liu <wei.l...@citrix.com>
>> >> > ---
>> >> >  .gitignore             |   1 +
>> >> >  tools/misc/Makefile    |   4 ++
>> >> >  tools/misc/xen-ndctl.c | 172 +
>> >> >  3 files changed, 177 insertions(+)
>> >> >  create mode 100644 tools/misc/xen-ndctl.c
>> >>
>> >> What about my offer to move this functionality into the upstream ndctl
>> >> utility [1]? I think it is thoroughly confusing that you are reusing
>> >> the name 'ndctl' and avoiding integration with the upstream ndctl
>> >> utility.
>> >>
>> >> [1]: https://patchwork.kernel.org/patch/9632865/
>> >
>> > I don't object to integrating it with ndctl.
>> >
>> > My only concern is that the integration will introduce two types of
>> > user interface. The upstream ndctl works with the kernel driver and
>> > provides easy-to-use *names* (e.g., namespace0.0, region0, nmem0,
>> > etc.) for user input. However, this version of the patchset hides NFIT
>> > from Dom0 (to simplify the first implementation), so the kernel driver
>> > does not work in Dom0, and neither does ndctl. Instead, xen-ndctl has
>> > to use *the physical address* for users to specify the NVDIMM region
>> > they are interested in, which is different from upstream ndctl.
>>
>> Ok, I think this means that xen-ndctl should be renamed (xen-nvdimm?)
>> so that the distinction between the 2 tools is clear.
>
> I think it makes much more sense to integrate this in the upstream
> version of ndctl. As surely in the future the ndctl will need to work
> with other OSes too? Such as FreeBSD?

I'm receptive to carrying Xen-specific enabling and / or a FreeBSD
compat layer in ndctl.



Re: [Xen-devel] [RFC XEN PATCH v3 12/39] tools/xen-ndctl: add NVDIMM management util 'xen-ndctl'

2017-09-11 Thread Dan Williams
On Sun, Sep 10, 2017 at 10:39 PM, Haozhong Zhang
<haozhong.zh...@intel.com> wrote:
> On 09/10/17 22:10 -0700, Dan Williams wrote:
>> On Sun, Sep 10, 2017 at 9:37 PM, Haozhong Zhang
>> <haozhong.zh...@intel.com> wrote:
>> > The kernel NVDIMM driver and the traditional NVDIMM management
>> > utilities in Dom0 do not work now. 'xen-ndctl' is added as an
>> > alternative, which manages NVDIMM via Xen hypercalls.
>> >
>> > Signed-off-by: Haozhong Zhang <haozhong.zh...@intel.com>
>> > ---
>> > Cc: Ian Jackson <ian.jack...@eu.citrix.com>
>> > Cc: Wei Liu <wei.l...@citrix.com>
>> > ---
>> >  .gitignore             |   1 +
>> >  tools/misc/Makefile    |   4 ++
>> >  tools/misc/xen-ndctl.c | 172 +
>> >  3 files changed, 177 insertions(+)
>> >  create mode 100644 tools/misc/xen-ndctl.c
>>
>> What about my offer to move this functionality into the upstream ndctl
>> utility [1]? I think it is thoroughly confusing that you are reusing
>> the name 'ndctl' and avoiding integration with the upstream ndctl
>> utility.
>>
>> [1]: https://patchwork.kernel.org/patch/9632865/
>
> I don't object to integrating it with ndctl.
>
> My only concern is that the integration will introduce two types of
> user interface. The upstream ndctl works with the kernel driver and
> provides easy-to-use *names* (e.g., namespace0.0, region0, nmem0,
> etc.) for user input. However, this version of the patchset hides NFIT
> from Dom0 (to simplify the first implementation), so the kernel driver
> does not work in Dom0, and neither does ndctl. Instead, xen-ndctl has
> to use *the physical address* for users to specify the NVDIMM region
> they are interested in, which is different from upstream ndctl.

Ok, I think this means that xen-ndctl should be renamed (xen-nvdimm?)
so that the distinction between the 2 tools is clear.



Re: [Xen-devel] [RFC XEN PATCH v3 12/39] tools/xen-ndctl: add NVDIMM management util 'xen-ndctl'

2017-09-10 Thread Dan Williams
On Sun, Sep 10, 2017 at 9:37 PM, Haozhong Zhang
 wrote:
> The kernel NVDIMM driver and the traditional NVDIMM management
> utilities in Dom0 do not work now. 'xen-ndctl' is added as an
> alternative, which manages NVDIMM via Xen hypercalls.
>
> Signed-off-by: Haozhong Zhang 
> ---
> Cc: Ian Jackson 
> Cc: Wei Liu 
> ---
>  .gitignore             |   1 +
>  tools/misc/Makefile    |   4 ++
>  tools/misc/xen-ndctl.c | 172 +
>  3 files changed, 177 insertions(+)
>  create mode 100644 tools/misc/xen-ndctl.c

What about my offer to move this functionality into the upstream ndctl
utility [1]? I think it is thoroughly confusing that you are reusing
the name 'ndctl' and avoiding integration with the upstream ndctl
utility.

[1]: https://patchwork.kernel.org/patch/9632865/



Re: [Xen-devel] [PATCH v6 4/5] fs, xfs: introduce MAP_DIRECT for creating block-map-atomic file ranges

2017-08-24 Thread Dan Williams
On Thu, Aug 24, 2017 at 9:39 AM, Christoph Hellwig <h...@lst.de> wrote:
> On Thu, Aug 24, 2017 at 09:31:17AM -0700, Dan Williams wrote:
>> External agent is a DMA device, or a hypervisor like Xen. In the DMA
>> case perhaps we can use the fcntl lease mechanism, I'll investigate.
>> In the Xen case it actually would need to use fiemap() to discover the
>> physical addresses that back the file to setup their M2P tables.
>> Here's the discussion where we discovered that physical address
>> dependency:
>>
>> https://lists.xen.org/archives/html/xen-devel/2017-04/msg00419.html
>
> fiemap does not work to discover physical addresses.  If they want
> to do anything involving physical address they will need a kernel
> driver.

True, it's broken with respect to multi-device filesystems and these
patches do nothing to fix that problem. Ok, I'm fine to let that use
case depend on a kernel driver and just focus on fixing the DMA case.



Re: [Xen-devel] [PATCH v6 4/5] fs, xfs: introduce MAP_DIRECT for creating block-map-atomic file ranges

2017-08-24 Thread Dan Williams
[ adding Xen ]

On Thu, Aug 24, 2017 at 9:11 AM, Christoph Hellwig  wrote:
> I still can't make any sense of this description.  What is an external
> agent?  Userspace obviously can't ever see a change in the extent
> map, so it can't be meant.

External agent is a DMA device, or a hypervisor like Xen. In the DMA
case perhaps we can use the fcntl lease mechanism, I'll investigate.
In the Xen case it actually would need to use fiemap() to discover the
physical addresses that back the file to setup their M2P tables.
Here's the discussion where we discovered that physical address
dependency:

https://lists.xen.org/archives/html/xen-devel/2017-04/msg00419.html

> It would help a lot if you could come up with a concrete user for this,
> including example code.

Will do.
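
For reference, a minimal sketch of the fiemap() interface being discussed, using the FS_IOC_FIEMAP ioctl (an illustration only, not the example code promised above; as noted elsewhere in this thread, the reported physical offsets are not stable and do not account for multi-device filesystems):

#include <fcntl.h>
#include <linux/fiemap.h>
#include <linux/fs.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    /* room for up to 16 extents covering the first 1 MiB of the file */
    struct fiemap *fm = calloc(1, sizeof(*fm) + 16 * sizeof(struct fiemap_extent));
    if (!fm) { close(fd); return 1; }
    fm->fm_start = 0;
    fm->fm_length = 1 << 20;
    fm->fm_flags = FIEMAP_FLAG_SYNC;   /* flush dirty data before mapping */
    fm->fm_extent_count = 16;

    if (ioctl(fd, FS_IOC_FIEMAP, fm) < 0) { perror("FS_IOC_FIEMAP"); return 1; }

    for (unsigned i = 0; i < fm->fm_mapped_extents; i++)
        printf("logical %llu -> physical %llu, len %llu\n",
               (unsigned long long)fm->fm_extents[i].fe_logical,
               (unsigned long long)fm->fm_extents[i].fe_physical,
               (unsigned long long)fm->fm_extents[i].fe_length);

    free(fm);
    close(fd);
    return 0;
}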



Re: [Xen-devel] [RFC XEN PATCH v2 00/15] Add vNVDIMM support to HVM domains

2017-04-04 Thread Dan Williams
>> I don't think KVM has the same issue, but honestly I don't have the
>> full mental model of how KVM supports mmap. I've at least been able to
>> run a guest where the "pmem" is just dynamic page cache on the host
>> side so the physical memory mapping is changing all the time due to
>> swap. KVM does not have this third-party M2P mapping table to keep up
>> to date so I assume it is just handled by the standard mmap support
>> for establishing a guest physical address range and the standard
>> mapping-invalidate + remap mechanism just works.
>
> Could it be possible to have a Xen driver that would listen on
> these notifications and percolate those changes to this driver? Then
> this driver would make the appropriate hypercalls to update the M2P?
>
> That would solve 2/, I think?

I think that could work. That sounds like userfaultfd support for DAX
which is something I want to take a look at in the next couple kernel
cycles for other reasons like live migration of guest-VMs with DAX
mappings.



Re: [Xen-devel] [RFC XEN PATCH v2 00/15] Add vNVDIMM support to HVM domains

2017-04-04 Thread Dan Williams
On Tue, Apr 4, 2017 at 10:34 AM, Konrad Rzeszutek Wilk
<konrad.w...@oracle.com> wrote:
> On Tue, Apr 04, 2017 at 10:16:41AM -0700, Dan Williams wrote:
>> On Tue, Apr 4, 2017 at 10:00 AM, Konrad Rzeszutek Wilk
>> <konrad.w...@oracle.com> wrote:
>> > On Sat, Apr 01, 2017 at 08:45:45AM -0700, Dan Williams wrote:
>> >> On Sat, Apr 1, 2017 at 4:54 AM, Konrad Rzeszutek Wilk <kon...@darnok.org> 
>> >> wrote:
>> >> > ..snip..
>> >> >> >> Is there a resource I can read more about why the hypervisor needs 
>> >> >> >> to
>> >> >> >> have this M2P mapping for nvdimm support?
>> >> >> >
>> >> >> > M2P is basically an array of frame numbers. It's indexed by the host
>> >> >> > page frame number, or the machine frame number (MFN) in Xen's
>> >> >> > definition. The n'th entry records the guest page frame number that 
>> >> >> > is
>> >> >> > mapped to MFN n. M2P is one of the core data structures used in Xen
>> >> >> > memory management, and is used to convert MFN to guest PFN. A
>> >> >> > read-only version of M2P is also exposed as part of ABI to guest. In
>> >> >> > the previous design discussion, we decided to put the management of
>> >> >> > NVDIMM in the existing Xen memory management as much as possible, so
>> >> >> > we need to build M2P for NVDIMM as well.
>> >> >> >
>> >> >>
>> >> >> Thanks, but what I don't understand is why this M2P lookup is needed?
>> >> >
>> >> > Xen uses it to construct the EPT page tables for the guests.
>> >> >
>> >> >> Does Xen establish this metadata for PCI mmio ranges as well? What Xen
>> >> >
>> >> > It doesn't have that (M2P) for PCI MMIO ranges. For those it has a
>> >> > ranges construct (since those are usually contiguous and given
>> >> > in ranges to a guest).
>> >>
>> >> So, I'm confused again. This patchset / enabling requires both M2P and
>> >> contiguous PMEM ranges. If the PMEM is contiguous it seems you don't
>> >> need M2P and can just reuse the MMIO enabling, or am I missing
>> >> something?
>> >
>> > I think I am confusing you.
>> >
>> > The patchset (specifically [04/15] xen/x86: add
>> > XEN_SYSCTL_nvdimm_pmem_setup to setup host pmem)
>> > adds a hypercall to tell Xen where on the NVDIMM it can put
>> > the M2P array as well as the frametables ('struct page').
>> >
>> > There is no range support. The reason is that if you break up
>> > an NVDIMM into various chunks (and then put a filesystem on top of it) - and
>> > then figure out which of the SPAs belong to the file - and then
>> > "expose" that file to a guest as NVDIMM - its SPAs won't
>> > be contiguous. Hence the hypervisor would need to break the
>> > 'ranges' structure down into either a bitmap or an M2P
>> > and also store it. This can get quite tricky so you may
>> > as well just start with an M2P and 'struct page'.
>>
>> Ok... but the problem then becomes that the filesystem is free to
>> change the file-offset to SPA mapping any time it wants. So the M2P
>> support is broken if it expects static relationships.
>
> Can't you flock a file and that will freeze it? Or mlock it since
> one is rather mmap-ing it?

Unfortunately no. This dovetails with the discussion we have been
having with filesystem folks about the need to call msync() after
every write. Whenever the filesystem sees a write fault it is free to
move blocks around in the file, think allocation or copy-on-write
operations like reflink. The filesystem depends on the application
calling msync/fsync before it makes the writes from those faults
durable against crash / power loss.  Also, actions like online defrag
can change these offset to physical address relationships without any
involvement from the application. There's currently no mechanism to
lock out this behavior because the filesystem assumes that it can just
invalidate mappings to make the application re-fault.
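
A minimal userspace sketch of the msync()/fsync() requirement described above (the /mnt/dax path and 4 KiB length are assumptions, not taken from the patches under discussion): data written through the mapping is only durable, and only safe against the block moves described here, after the sync.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* assumed file on a DAX-mounted filesystem (e.g. ext4/XFS with -o dax) */
    int fd = open("/mnt/dax/file", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(p, "hello pmem");

    /* the write fault above may have allocated or CoW'd new blocks;
     * only msync()/fsync() makes the data durable against crash */
    if (msync(p, 4096, MS_SYNC) < 0) { perror("msync"); return 1; }

    munmap(p, 4096);
    close(fd);
    return 0;
}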

>>
>> > The placement of those datastructures is "v2 patch series relies on
>> > users/admins in Dom0 instead of Dom0 driver to indicate the location
>> > to store the frametable and M2P of pmem."
>> >
>> > Hope this helps?
>>
>> It does, but it still seems we're stuck between either 1/ not needing
>> M2P if we can pass a whole pmem-namespace through to the guest or 2/
>> M2P being broken by non-static file-offset to physical address
>> mappings.

Re: [Xen-devel] [RFC XEN PATCH v2 00/15] Add vNVDIMM support to HVM domains

2017-04-04 Thread Dan Williams
On Tue, Apr 4, 2017 at 10:00 AM, Konrad Rzeszutek Wilk
<konrad.w...@oracle.com> wrote:
> On Sat, Apr 01, 2017 at 08:45:45AM -0700, Dan Williams wrote:
>> On Sat, Apr 1, 2017 at 4:54 AM, Konrad Rzeszutek Wilk <kon...@darnok.org> 
>> wrote:
>> > ..snip..
>> >> >> Is there a resource I can read more about why the hypervisor needs to
>> >> >> have this M2P mapping for nvdimm support?
>> >> >
>> >> > M2P is basically an array of frame numbers. It's indexed by the host
>> >> > page frame number, or the machine frame number (MFN) in Xen's
>> >> > definition. The n'th entry records the guest page frame number that is
>> >> > mapped to MFN n. M2P is one of the core data structures used in Xen
>> >> > memory management, and is used to convert MFN to guest PFN. A
>> >> > read-only version of M2P is also exposed as part of ABI to guest. In
>> >> > the previous design discussion, we decided to put the management of
>> >> > NVDIMM in the existing Xen memory management as much as possible, so
>> >> > we need to build M2P for NVDIMM as well.
>> >> >
>> >>
>> >> Thanks, but what I don't understand is why this M2P lookup is needed?
>> >
>> > Xen uses it to construct the EPT page tables for the guests.
>> >
>> >> Does Xen establish this metadata for PCI mmio ranges as well? What Xen
>> >
>> > It doesn't have that (M2P) for PCI MMIO ranges. For those it has a
>> > ranges construct (since those are usually contiguous and given
>> > in ranges to a guest).
>>
>> So, I'm confused again. This patchset / enabling requires both M2P and
>> contiguous PMEM ranges. If the PMEM is contiguous it seems you don't
>> need M2P and can just reuse the MMIO enabling, or am I missing
>> something?
>
> I think I am confusing you.
>
> The patchset (specifically [04/15] xen/x86: add XEN_SYSCTL_nvdimm_pmem_setup
> to setup host pmem)
> adds a hypercall to tell Xen where on the NVDIMM it can put
> the M2P array as well as the frametables ('struct page').
>
> There is no range support. The reason is that if you break up
> an NVDIMM into various chunks (and then put a filesystem on top of it) - and
> then figure out which of the SPAs belong to the file - and then
> "expose" that file to a guest as NVDIMM - its SPAs won't
> be contiguous. Hence the hypervisor would need to break the
> 'ranges' structure down into either a bitmap or an M2P
> and also store it. This can get quite tricky so you may
> as well just start with an M2P and 'struct page'.

Ok... but the problem then becomes that the filesystem is free to
change the file-offset to SPA mapping any time it wants. So the M2P
support is broken if it expects static relationships.

> The placement of those datastructures is "v2 patch series relies on
> users/admins in Dom0 instead of Dom0 driver to indicate the location
> to store the frametable and M2P of pmem."
>
> Hope this helps?

It does, but it still seems we're stuck between either 1/ not needing
M2P if we can pass a whole pmem-namespace through to the guest or 2/
M2P being broken by non-static file-offset to physical address
mappings.



Re: [Xen-devel] [RFC XEN PATCH v2 00/15] Add vNVDIMM support to HVM domains

2017-04-01 Thread Dan Williams
On Sat, Apr 1, 2017 at 4:54 AM, Konrad Rzeszutek Wilk  wrote:
> ..snip..
>> >> Is there a resource I can read more about why the hypervisor needs to
>> >> have this M2P mapping for nvdimm support?
>> >
>> > M2P is basically an array of frame numbers. It's indexed by the host
>> > page frame number, or the machine frame number (MFN) in Xen's
>> > definition. The n'th entry records the guest page frame number that is
>> > mapped to MFN n. M2P is one of the core data structures used in Xen
>> > memory management, and is used to convert MFN to guest PFN. A
>> > read-only version of M2P is also exposed as part of ABI to guest. In
>> > the previous design discussion, we decided to put the management of
>> > NVDIMM in the existing Xen memory management as much as possible, so
>> > we need to build M2P for NVDIMM as well.
>> >
>>
>> Thanks, but what I don't understand is why this M2P lookup is needed?
>
> Xen uses it to construct the EPT page tables for the guests.
>
>> Does Xen establish this metadata for PCI mmio ranges as well? What Xen
>
> It doesn't have that (M2P) for PCI MMIO ranges. For those it has a
> ranges construct (since those are usually contiguous and given
> in ranges to a guest).

So, I'm confused again. This patchset / enabling requires both M2P and
contiguous PMEM ranges. If the PMEM is contiguous it seems you don't
need M2P and can just reuse the MMIO enabling, or am I missing
something?



Re: [Xen-devel] [RFC XEN PATCH v2 00/15] Add vNVDIMM support to HVM domains

2017-03-30 Thread Dan Williams
On Thu, Mar 30, 2017 at 1:21 AM, Haozhong Zhang
<haozhong.zh...@intel.com> wrote:
> On 03/29/17 21:20 -0700, Dan Williams wrote:
>> On Sun, Mar 19, 2017 at 5:09 PM, Haozhong Zhang
>> <haozhong.zh...@intel.com> wrote:
>> > This is v2 RFC patch series to add vNVDIMM support to HVM domains.
>> > v1 can be found at 
>> > https://lists.xenproject.org/archives/html/xen-devel/2016-10/msg00424.html.
>> >
>> > No label and no _DSM except function 0 "query implemented functions"
>> > is supported by this version, but they will be added by future patches.
>> >
>> > The corresponding Qemu patch series is sent in another thread
>> > "[RFC QEMU PATCH v2 00/10] Implement vNVDIMM for Xen HVM guest".
>> >
>> > All patch series can be found at
>> >   Xen:  https://github.com/hzzhan9/xen.git nvdimm-rfc-v2
>> >   Qemu: https://github.com/hzzhan9/qemu.git xen-nvdimm-rfc-v2
>> >
>> > Changes in v2
>> > =============
>> >
>> > - One of the primary changes in v2 is dropping the Linux kernel
>> >   patches, which were used to reserve an area on host pmem for placing
>> >   its frametable and M2P table. In v2, we add a management tool
>> >   xen-ndctl, which is used in Dom0 to notify the Xen hypervisor of
>> >   which storage can be used to manage the host pmem.
>> >
>> >   For example,
>> >   1.   xen-ndctl setup 0x24 0x38 0x38 0x3c
>> > tells Xen hypervisor to use host pmem pages at MFN 0x38 ~
>> > 0x3c to manage host pmem pages at MFN 0x24 ~ 0x38.
>> > I.e. the former is used to place the frame table and M2P table of
>> > both ranges of pmem pages.
>> >
>> >   2.   xen-ndctl setup 0x24 0x38
>> > tells Xen hypervisor to use the regular RAM to manage the host
>> > pmem pages at MFN 0x24 ~ 0x38. I.e. the regular RAM is used
>> > to place the frame table and M2P table.
>> >
>> > - Another primary change in v2 is dropping the support to map files on
>> >   the host pmem to HVM domains as virtual NVDIMMs, as I cannot find a
>> >   stable way to fix the fiemap of host files. Instead, we can rely on the
>> >   ability added in Linux kernel v4.9 that enables creating multiple
>> >   pmem namespaces on a single nvdimm interleave set.
>>
>> This restriction is unfortunate, and it seems to limit the future
>> architecture of the pmem driver. We may not always be able to
>> guarantee a contiguous physical address range to Xen for a given
>> namespace and may want to concatenate disjoint physical address ranges
>> into a logically contiguous namespace.
>>
>
> The hypervisor code that actually maps host pmem addresses to the guest
> does not require the host address to be contiguous. We can modify the
> toolstack code that gets the address range from a namespace to support
> passing multiple address ranges to the Xen hypervisor.
>
>> Is there a resource I can read more about why the hypervisor needs to
>> have this M2P mapping for nvdimm support?
>
> M2P is basically an array of frame numbers. It's indexed by the host
> page frame number, or the machine frame number (MFN) in Xen's
> definition. The n'th entry records the guest page frame number that is
> mapped to MFN n. M2P is one of the core data structures used in Xen
> memory management, and is used to convert MFN to guest PFN. A
> read-only version of M2P is also exposed as part of ABI to guest. In
> the previous design discussion, we decided to put the management of
> NVDIMM in the existing Xen memory management as much as possible, so
> we need to build M2P for NVDIMM as well.
>

Thanks, but what I don't understand is why this M2P lookup is needed?
Does Xen establish this metadata for PCI mmio ranges as well? What Xen
memory management operations does this enable? Sorry if these are
basic Xen questions, I'm just looking to see if we can make the
mapping support more dynamic. For example, what if we wanted to change
the MFN to guest PFN relationship after every fault?
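
A minimal sketch of what the M2P table described above amounts to conceptually (illustrative types and names only, not Xen's actual definitions): a machine-frame-indexed array whose n'th entry is the guest frame currently mapped to MFN n.

#include <stdint.h>

typedef uint64_t mfn_t;    /* machine (host) frame number */
typedef uint64_t gfn_t;    /* guest frame number */

#define GFN_INVALID ((gfn_t)~0ULL)

/* one entry per machine frame; in this series it would also cover pmem MFNs */
static gfn_t *m2p;

static gfn_t mfn_to_gfn(mfn_t mfn)
{
    return m2p[mfn];            /* the M2P lookup: MFN -> guest PFN */
}

static void m2p_set(mfn_t mfn, gfn_t gfn)
{
    m2p[mfn] = gfn;             /* updated whenever a frame is (re)assigned */
}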



Re: [Xen-devel] [RFC XEN PATCH v2 00/15] Add vNVDIMM support to HVM domains

2017-03-29 Thread Dan Williams
On Sun, Mar 19, 2017 at 5:09 PM, Haozhong Zhang
 wrote:
> This is v2 RFC patch series to add vNVDIMM support to HVM domains.
> v1 can be found at 
> https://lists.xenproject.org/archives/html/xen-devel/2016-10/msg00424.html.
>
> No label and no _DSM except function 0 "query implemented functions"
> is supported by this version, but they will be added by future patches.
>
> The corresponding Qemu patch series is sent in another thread
> "[RFC QEMU PATCH v2 00/10] Implement vNVDIMM for Xen HVM guest".
>
> All patch series can be found at
>   Xen:  https://github.com/hzzhan9/xen.git nvdimm-rfc-v2
>   Qemu: https://github.com/hzzhan9/qemu.git xen-nvdimm-rfc-v2
>
> Changes in v2
> =============
>
> - One of the primary changes in v2 is dropping the Linux kernel
>   patches, which were used to reserve an area on host pmem for placing
>   its frametable and M2P table. In v2, we add a management tool
>   xen-ndctl, which is used in Dom0 to notify the Xen hypervisor of
>   which storage can be used to manage the host pmem.
>
>   For example,
>   1.   xen-ndctl setup 0x24 0x38 0x38 0x3c
> tells Xen hypervisor to use host pmem pages at MFN 0x38 ~
> 0x3c to manage host pmem pages at MFN 0x24 ~ 0x38.
> I.e. the former is used to place the frame table and M2P table of
> both ranges of pmem pages.
>
>   2.   xen-ndctl setup 0x24 0x38
> tells Xen hypervisor to use the regular RAM to manage the host
> pmem pages at MFN 0x24 ~ 0x38. I.e. the regular RAM is used
> to place the frame table and M2P table.
>
> - Another primary change in v2 is dropping the support to map files on
>   the host pmem to HVM domains as virtual NVDIMMs, as I cannot find a
>   stable way to fix the fiemap of host files. Instead, we can rely on the
>   ability added in Linux kernel v4.9 that enables creating multiple
>   pmem namespaces on a single nvdimm interleave set.

This restriction is unfortunate, and it seems to limit the future
architecture of the pmem driver. We may not always be able to
guarantee a contiguous physical address range to Xen for a given
namespace and may want to concatenate disjoint physical address ranges
into a logically contiguous namespace.

Is there a resource I can read more about why the hypervisor needs to
have this M2P mapping for nvdimm support?



Re: [Xen-devel] [RFC KERNEL PATCH 0/2] Add Dom0 NVDIMM support for Xen

2016-10-13 Thread Dan Williams
On Thu, Oct 13, 2016 at 9:01 AM, Andrew Cooper
<andrew.coop...@citrix.com> wrote:
> On 13/10/16 16:40, Dan Williams wrote:
>> On Thu, Oct 13, 2016 at 2:08 AM, Jan Beulich <jbeul...@suse.com> wrote:
>> [..]
>>>> I think we can do something similar for Xen, e.g., lay another pseudo
>>>> device on /dev/pmem and do the reservation, like 2. in my previous
>>>> reply.
>>> Well, my opinion certainly doesn't count much here, but I continue to
>>> consider this a bad idea. For entities like drivers it may well be
>>> appropriate, but I think there ought to be an independent concept
>>> of "OS reserved", and in the Xen case this could then be shared
>>> between hypervisor and Dom0 kernel. Or if we were to consider Dom0
>>> "just a guest", things should even be the other way around: Xen gets
>>> all of the OS reserved space, and Dom0 needs something custom.
>> You haven't made the case why Xen is special and other applications of
>> persistent memory are not.
>
> In a Xen system, Xen runs in the baremetal root-mode ring0, and dom0 is
> a VM running in ring1/3 with the nvdimm driver.  This is the opposite
> way around to the KVM model.
>
> Dom0, being the hardware domain, has default ownership of all the
> hardware, but to gain access in the first place, it must request a
> mapping from Xen.

This is where my understanding of the Xen model breaks down.  Are you
saying dom0 can't access the persistent memory range unless the ring0
agent has metadata storage space for tracking what it maps into dom0?
That can't be true because then PCI memory ranges would not work
without metadata reserve space.  Dom0 still needs to map and write the
DIMMs to even set up the struct page reservation; it isn't established
by default.

> Xen therefore needs to know and cope with being able
> to give dom0 a mapping to the nvdimms, without touching the content of
> the nvdimm itself (so as to avoid corrupting data).

Is it true that this metadata only comes into use when remapping the
dom0 discovered range(s) into a guest VM?

> Once dom0 has a mapping of the nvdimm, the nvdimm driver can go to work
> and figure out what is on the DIMM, and which areas are safe to use.

I don't understand this ordering of events.  Dom0 needs to have a
mapping to even write the on-media structure to indicate a
reservation.  So, initial dom0 access can't depend on metadata
reservation already being present.

> At this point, a Xen subsystem in Linux could choose one or more areas
> to hand back to the hypervisor to use as RAM/other.

To me all this configuration seems to come after the fact.  After dom0
sees /dev/pmemX devices, then it can go to work carving it up and
writing Xen specific metadata to the range(s).  The struct page
reservation never comes into the picture.  In fact, a raw mode
namespace (one without a reservation) could be used in this model, the
nvdimm core never needs to know what is happening.



Re: [Xen-devel] [RFC KERNEL PATCH 0/2] Add Dom0 NVDIMM support for Xen

2016-10-13 Thread Dan Williams
On Thu, Oct 13, 2016 at 2:08 AM, Jan Beulich  wrote:
[..]
>> I think we can do something similar for Xen, e.g., lay another pseudo
>> device on /dev/pmem and do the reservation, like 2. in my previous
>> reply.
>
> Well, my opinion certainly doesn't count much here, but I continue to
> consider this a bad idea. For entities like drivers it may well be
> appropriate, but I think there ought to be an independent concept
> of "OS reserved", and in the Xen case this could then be shared
> between hypervisor and Dom0 kernel. Or if we were to consider Dom0
> "just a guest", things should even be the other way around: Xen gets
> all of the OS reserved space, and Dom0 needs something custom.

You haven't made the case why Xen is special and other applications of
persistent memory are not.  The current struct page reservation
supports fundamental address-ability of persistent memory namespaces
for the rest of the kernel.  The Xen reservation is application
specific.  XFS, EXT4, and DM also have application specific usages of
persistent memory and consume metadata space out of a block device. If
we don't need an XFS-mode nvdimm device, why do we need Xen-mode?



Re: [Xen-devel] [RFC KERNEL PATCH 0/2] Add Dom0 NVDIMM support for Xen

2016-10-12 Thread Dan Williams
On Wed, Oct 12, 2016 at 9:01 AM, Jan Beulich  wrote:
 On 12.10.16 at 17:42,  wrote:
>> On Wed, Oct 12, 2016 at 8:39 AM, Jan Beulich  wrote:
>> On 12.10.16 at 16:58,  wrote:
 On 10/12/16 05:32 -0600, Jan Beulich wrote:
 On 12.10.16 at 12:33,  wrote:
>> The layout is shown as the following diagram.
>>
>> +---------------+-----------+-------+----------+--------------+
>> | whatever used | Partition | Super | Reserved | /dev/pmem0p1 |
>> |   by kernel   |   Table   | Block | for Xen  |              |
>> +---------------+-----------+-------+----------+--------------+
>> \_____________________________   _____________________________/
>>                                V
>>                           /dev/pmem0
>
>I have to admit that I dislike this, for not being OS-agnostic.
>Neither should there be any Xen-specific region, nor should the
>"whatever used by kernel" one be restricted to just Linux. What
>I could see is an OS-reserved area ahead of the partition table,
>the exact usage of which depends on which OS is currently
>running (and in the Xen case this might be both Xen _and_ the
>Dom0 kernel, arbitrated by a tbd protocol). After all, when
>running under Xen, the Dom0 may not have a need for as much
>control data as it has when running on bare hardware, for it
>controlling less (if any) of the actual memory ranges when Xen
>is present.
>

 Isn't this OS-reserved area still not OS-agnostic, as it requires OS
 to know where the reserved area is?  Or do you mean it's not if it's
 defined by a protocol that is accepted by all OSes?
>>>
>>> The latter - we clearly won't get away without some agreement on
>>> where to retrieve position and size of this area. I was simply
>>> assuming that such a protocol already exists.
>>>
>>
>> No, we should not mix the struct page reservation that the Dom0 kernel
>> may actively use with the Xen reservation that the Dom0 kernel does
>> not consume.  Explain again what is wrong with the partition approach?
>
> Not sure what was unclear in my previous reply. I don't think there
> should be apriori knowledge of whether Xen is (going to be) used on
> a system, and even if it gets used, but just occasionally, it would
> (apart from the abstract considerations already given) be a waste
> of resources to set something aside that could be used for other
> purposes while Xen is not running. Static partitioning should only be
> needed for persistent data.
>

The reservation needs to be persistent / static even if the data is
volatile, as is the case with struct page, because we can't have the
size of the device change depending on use.  So, from the aspect of
wasting space while Xen is not in use, both partitions and the
intrinsic reservation approach suffer the same problem. Setting that
aside I don't want to mix 2 different use cases into the same
reservation.

The kernel needs to know about the struct page reservation because it
needs to manage the lifetime of page references vs the lifetime of the
device.  It does not have the same relationship with a Xen reservation
which is why I'm proposing they be managed separately.

Note that Toshi and Mike added DM for DAX.  This enabling ends up
writing DM metadata on the device without adding new reservation
mechanisms to the nvdimm core.  I'm struggling to see how the Xen use
case is materially different from DM.  In the end it's an application
specific metadata space.



Re: [Xen-devel] [RFC KERNEL PATCH 0/2] Add Dom0 NVDIMM support for Xen

2016-10-12 Thread Dan Williams
On Wed, Oct 12, 2016 at 8:39 AM, Jan Beulich  wrote:
 On 12.10.16 at 16:58,  wrote:
>> On 10/12/16 05:32 -0600, Jan Beulich wrote:
>> On 12.10.16 at 12:33,  wrote:
 The layout is shown as the following diagram.

 +---------------+-----------+-------+----------+--------------+
 | whatever used | Partition | Super | Reserved | /dev/pmem0p1 |
 |   by kernel   |   Table   | Block | for Xen  |              |
 +---------------+-----------+-------+----------+--------------+
 \_____________________________   _____________________________/
                                V
                           /dev/pmem0
>>>
>>>I have to admit that I dislike this, for not being OS-agnostic.
>>>Neither should there be any Xen-specific region, nor should the
>>>"whatever used by kernel" one be restricted to just Linux. What
>>>I could see is an OS-reserved area ahead of the partition table,
>>>the exact usage of which depends on which OS is currently
>>>running (and in the Xen case this might be both Xen _and_ the
>>>Dom0 kernel, arbitrated by a tbd protocol). After all, when
>>>running under Xen, the Dom0 may not have a need for as much
>>>control data as it has when running on bare hardware, for it
>>>controlling less (if any) of the actual memory ranges when Xen
>>>is present.
>>>
>>
>> Isn't this OS-reserved area still not OS-agnostic, as it requires OS
>> to know where the reserved area is?  Or do you mean it's not if it's
>> defined by a protocol that is accepted by all OSes?
>
> The latter - we clearly won't get away without some agreement on
> where to retrieve position and size of this area. I was simply
> assuming that such a protocol already exists.
>

No, we should not mix the struct page reservation that the Dom0 kernel
may actively use with the Xen reservation that the Dom0 kernel does
not consume.  Explain again what is wrong with the partition approach?



Re: [Xen-devel] [RFC KERNEL PATCH 0/2] Add Dom0 NVDIMM support for Xen

2016-10-11 Thread Dan Williams
On Tue, Oct 11, 2016 at 12:48 PM, Konrad Rzeszutek Wilk
<konrad.w...@oracle.com> wrote:
> On Tue, Oct 11, 2016 at 12:28:56PM -0700, Dan Williams wrote:
>> On Tue, Oct 11, 2016 at 11:33 AM, Konrad Rzeszutek Wilk
>> <konrad.w...@oracle.com> wrote:
>> > On Tue, Oct 11, 2016 at 10:51:19AM -0700, Dan Williams wrote:
>> [..]
>> >> Right, but why does the libnvdimm core need to know about this
>> >> specific Xen reservation?  For example, if Xen wants some in-kernel
>> >
>> > Let me turn this around - why does the libnvdimm core need to know about
>> > Linux specific parts? Shouldn't this be OS agnostic, so that FreeBSD
>> > for example can also poke a hole in this and fill it with its
>> > OS-management meta-data?
>>
>> Specifically the core needs to know so that it can answer the Linux
>> specific question of whether the pfn returned by ->direct_access() has
>> a corresponding struct page or not. It's tied to the lifetime of the
>> device and the usage of the reservation needs to be coordinated
>> against the references of those pages.  If FreeBSD decides it needs to
>> reserve "struct page" capacity at the start of the device, I would
>> hope that it reuses the same on-device info block that Linux is using
>> and not create a new "FreeBSD-mode" device type.
>
> The issue here (as I understand it, I may be missing something new)
> is that the size of this special namespace may be different. That is,
> the 'struct page' on FreeBSD could be 256 bytes while on Linux it is
> 64 bytes (numbers pulled out of the sky).
>
> Hence one would have to expand it or some such to re-use this.

Sure, but we could support that today.  If FreeBSD lays down the info
block it is free to make a bigger reservation and Linux would be happy
to use a smaller subset.  If we, as an industry, want this "struct
page" reservation to be common we can take it to a standards body to
make as a cross-OS guarantee... but I think this is separate from the
Xen reservation.

>> To be honest I do not yet understand what metadata Xen wants to store
>> in the device, but it seems the producer and consumer of that metadata
>> is Xen itself and not the wider Linux kernel as is the case with
>> struct page.  Can you fill me in on what problem Xen solves with this
>
> Exactly!
>> reservation?
>
> The same as Linux - its variant of 'struct page'. Which I think is
> smaller than the Linux one, but perhaps it is not?
>

If the hypervisor needs to know where it can store some metadata, can
that be satisfied with userspace tooling in Dom0? Something like,
"/dev/pmem0p1 == Xen metadata" and "/dev/pmem0p2 == DAX filesystem
with files to hand to guests".  So my question is not about the
rationale for having metadata, it's why does the Linux kernel need to
know about the Xen reservation? As far as I can see it is independent
/ opaque to the kernel.



Re: [Xen-devel] [RFC KERNEL PATCH 0/2] Add Dom0 NVDIMM support for Xen

2016-10-11 Thread Dan Williams
On Tue, Oct 11, 2016 at 11:33 AM, Konrad Rzeszutek Wilk
<konrad.w...@oracle.com> wrote:
> On Tue, Oct 11, 2016 at 10:51:19AM -0700, Dan Williams wrote:
[..]
>> Right, but why does the libnvdimm core need to know about this
>> specific Xen reservation?  For example, if Xen wants some in-kernel
>
> Let me turn this around - why does the libnvdimm core need to know about
> Linux specific parts? Shouldn't this be OS agnostic, so that FreeBSD
> for example can also poke a hole in this and fill it with its
> OS-management meta-data?

Specifically the core needs to know so that it can answer the Linux
specific question of whether the pfn returned by ->direct_access() has
a corresponding struct page or not. It's tied to the lifetime of the
device and the usage of the reservation needs to be coordinated
against the references of those pages.  If FreeBSD decides it needs to
reserve "struct page" capacity at the start of the device, I would
hope that it reuses the same on-device info block that Linux is using
and not create a new "FreeBSD-mode" device type.

To be honest I do not yet understand what metadata Xen wants to store
in the device, but it seems the producer and consumer of that metadata
is Xen itself and not the wider Linux kernel as is the case with
struct page.  Can you fill me in on what problem Xen solves with this
reservation?
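
A minimal sketch of that "does this pfn have a struct page?" question, assuming the pfn_t helpers from linux/pfn_t.h (an illustration only, not code from libnvdimm or the Xen patches):

#include <linux/mm.h>
#include <linux/pfn_t.h>

/* a pfn_t returned by ->direct_access() carries flags that say whether a
 * memmap (struct page array) reservation backs the device range */
static struct page *pmem_pfn_to_page_or_null(pfn_t pfn)
{
    if (!pfn_t_has_page(pfn))
        return NULL;               /* device-only pfn: no struct page */

    return pfn_t_to_page(pfn);     /* safe: backed by the reservation */
}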



Re: [Xen-devel] [RFC KERNEL PATCH 0/2] Add Dom0 NVDIMM support for Xen

2016-10-11 Thread Dan Williams
On Tue, Oct 11, 2016 at 9:58 AM, Konrad Rzeszutek Wilk
<konrad.w...@oracle.com> wrote:
> On Tue, Oct 11, 2016 at 08:53:33AM -0700, Dan Williams wrote:
>> On Tue, Oct 11, 2016 at 6:08 AM, Jan Beulich <jbeul...@suse.com> wrote:
>> >>>> Andrew Cooper <andrew.coop...@citrix.com> 10/10/16 6:44 PM >>>
>> >>On 10/10/16 01:35, Haozhong Zhang wrote:
>> >>> Xen hypervisor needs assistance from Dom0 Linux kernel for following 
>> >>> tasks:
>> >>> 1) Reserve an area on NVDIMM devices for Xen hypervisor to place
>> >>>memory management data structures, i.e. frame table and M2P table.
>> >>> 2) Report SPA ranges of NVDIMM devices and the reserved area to Xen
>> >>>hypervisor.
>> >>
>> >>However, I can't see any justification for 1).  Dom0 should not be
>> >>involved in Xen's management of its own frame table and m2p.  The mfns
>> >>making up the pmem/pblk regions should be treated just like any other
>> >>MMIO regions, and be handed wholesale to dom0 by default.
>> >
>> > That precludes the use as RAM extension, and I thought earlier rounds of
>> > discussion had got everyone in agreement that at least for the pmem case
>> > we will need some control data in Xen.
>>
>> The missing piece for me is why this reservation for control data
>> needs to be done in the libnvdimm core?  I would expect that any dax
>
> Isn't it done this way with Linux? That is, say the machine has
> 4GB of RAM and the NVDIMM is in the TB range. You want to put the 'struct page'
> for the NVDIMM ranges somewhere. That place can be in regions on the
> NVDIMM that ndctl can reserve.

Yes.

>> capable file could be mapped and made available to a guest.  This
>> includes /dev/ramX devices that are dax capable, but are external to
>> the libnvdimm sub-system.
>
> This is more of just keeping track of the ranges if say the DAX file is
> extremely fragmented and requires a lot of 'struct pages' to keep track of
> when stitching up the VMA.

Right, but why does the libnvdimm core need to know about this
specific Xen reservation?  For example, if Xen wants some in-kernel
driver to own a pmem region and place its own metadata on the device I
would recommend something like:

bdev = blkdev_get_by_path("/dev/pmemX",  FMODE_EXCL...);
bdev_direct_access(bdev, ...);

...in other words, I don't think we want libnvdimm to grow new device
types for every possible in-kernel user, Xen, MD, DM, etc. Instead,
just claim the resulting device.
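
A slightly fuller sketch of that claim-the-device pattern, assuming the circa-v4.8 blkdev_get_by_path()/bdev_direct_access() interfaces (an illustration only, error handling trimmed; not taken from any posted patch):

#include <linux/blkdev.h>
#include <linux/fs.h>
#include <linux/pfn_t.h>
#include <linux/printk.h>

/* an in-kernel user exclusively claims a pmem block device and resolves
 * its starting sector to a DAX address / pfn */
static int claim_pmem_example(const char *path)
{
    static char holder;                /* unique token identifying this claim */
    struct blk_dax_ctl dax = { .sector = 0, .size = PAGE_SIZE };
    struct block_device *bdev;
    long avail;

    /* FMODE_EXCL keeps other claimants (MD, DM, ...) off the device */
    bdev = blkdev_get_by_path(path, FMODE_READ | FMODE_WRITE | FMODE_EXCL,
                              &holder);
    if (IS_ERR(bdev))
        return PTR_ERR(bdev);

    avail = bdev_direct_access(bdev, &dax);
    if (avail < 0) {
        blkdev_put(bdev, FMODE_READ | FMODE_WRITE | FMODE_EXCL);
        return avail;
    }

    /* dax.addr / dax.pfn now describe the start of the pmem range */
    pr_info("pmem starts at pfn %#lx\n", pfn_t_to_pfn(dax.pfn));
    return 0;
}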



Re: [Xen-devel] [RFC KERNEL PATCH 0/2] Add Dom0 NVDIMM support for Xen

2016-10-11 Thread Dan Williams
On Tue, Oct 11, 2016 at 6:08 AM, Jan Beulich  wrote:
 Andrew Cooper  10/10/16 6:44 PM >>>
>>On 10/10/16 01:35, Haozhong Zhang wrote:
>>> Xen hypervisor needs assistance from Dom0 Linux kernel for following tasks:
>>> 1) Reserve an area on NVDIMM devices for Xen hypervisor to place
>>>memory management data structures, i.e. frame table and M2P table.
>>> 2) Report SPA ranges of NVDIMM devices and the reserved area to Xen
>>>hypervisor.
>>
>>However, I can't see any justification for 1).  Dom0 should not be
>>involved in Xen's management of its own frame table and m2p.  The mfns
>>making up the pmem/pblk regions should be treated just like any other
>>MMIO regions, and be handed wholesale to dom0 by default.
>
> That precludes the use as RAM extension, and I thought earlier rounds of
> discussion had got everyone in agreement that at least for the pmem case
> we will need some control data in Xen.

The missing piece for me is why this reservation for control data
needs to be done in the libnvdimm core?  I would expect that any dax
capable file could be mapped and made available to a guest.  This
includes /dev/ramX devices that are dax capable, but are external to
the libnvdimm sub-system.



Re: [Xen-devel] [RFC KERNEL PATCH 0/2] Add Dom0 NVDIMM support for Xen

2016-10-10 Thread Dan Williams
On Sun, Oct 9, 2016 at 11:32 PM, Haozhong Zhang
<haozhong.zh...@intel.com> wrote:
> On 10/09/16 20:45, Dan Williams wrote:
>> On Sun, Oct 9, 2016 at 5:35 PM, Haozhong Zhang <haozhong.zh...@intel.com> 
>> wrote:
>> > Overview
>> > 
>> > This RFC kernel patch series along with corresponding patch series of
>> > Xen, QEMU and ndctl implements Xen vNVDIMM, which can map the host
>> > NVDIMM devices to Xen HVM domU as vNVDIMM devices.
>> >
>> > Xen hypervisor does not include an NVDIMM driver, so it needs the
>> > assistance from the driver in Dom0 Linux kernel to manage NVDIMM
>> > devices. We currently only support NVDIMM devices in pmem mode.
>> >
>> > Design and Implementation
>> > =
>> > The complete design can be found at
>> >   
>> > https://lists.xenproject.org/archives/html/xen-devel/2016-07/msg01921.html.
>>
>> The KVM enabling for persistent memory does not need this support from
>> the kernel, and as far as I can see neither does Xen. If the
>> hypervisor needs to reserve some space it can simply trim the amount
>> that it hands to the guest.
>>
>
> Xen does not have the NVDIMM driver, so it cannot operate on NVDIMM
> devices by itself. Instead it relies on the driver in Dom0 Linux to
> probe NVDIMM and make the reservation.

I'm missing something because the design document talks about mmap'ing
files on a DAX filesystem.  So, I'm assuming it is similar to the KVM
NVDIMM virtualization case where an mmap range in dom0 is translated
into a guest physical range.  The suggestion is to reserve some memory
out of that mapping rather than introduce a new info block /
reservation type to the sub-system.



Re: [Xen-devel] [RFC KERNEL PATCH 0/2] Add Dom0 NVDIMM support for Xen

2016-10-09 Thread Dan Williams
On Sun, Oct 9, 2016 at 5:35 PM, Haozhong Zhang  wrote:
> Overview
> 
> This RFC kernel patch series along with corresponding patch series of
> Xen, QEMU and ndctl implements Xen vNVDIMM, which can map the host
> NVDIMM devices to Xen HVM domU as vNVDIMM devices.
>
> Xen hypervisor does not include an NVDIMM driver, so it needs the
> assistance from the driver in Dom0 Linux kernel to manage NVDIMM
> devices. We currently only support NVDIMM devices in pmem mode.
>
> Design and Implementation
> =
> The complete design can be found at
>   https://lists.xenproject.org/archives/html/xen-devel/2016-07/msg01921.html.

The KVM enabling for persistent memory does not need this support from
the kernel, and as far as I can see neither does Xen. If the
hypervisor needs to reserve some space it can simply trim the amount
that it hands to the guest.

The usage of fiemap and the sysfs resource for the pmem device, as
mentioned in the design document, does not seem to comprehend that
file block allocations may be discontiguous and may change over time
depending on the file.



Re: [Xen-devel] [arch] WARNING: CPU: 0 PID: 0 at kernel/memremap.c:31 memremap()

2015-07-22 Thread Dan Williams
[ note: this patch is in a dev branch for test coverage, safe to
disregard for now ]

On Wed, Jul 22, 2015 at 4:32 PM, kernel test robot
fengguang...@intel.com wrote:
 Greetings,

 0day kernel testing robot got the below dmesg and the first bad commit is

 git://git.kernel.org/pub/scm/linux/kernel/git/djbw/nvdimm.git pmem-api

 commit 163f9409a57082aed03fbeeb321fbf18bdaf5f42
 Author: Dan Williams dan.j.willi...@intel.com
 AuthorDate: Wed Jul 22 18:09:01 2015 -0400
 Commit: Dan Williams dan.j.willi...@intel.com
 CommitDate: Wed Jul 22 18:09:01 2015 -0400

 arch: introduce memremap(), replace ioremap_cache()

 Existing users of ioremap_cache() are mapping memory that is known in
 advance to not have i/o side effects.  These users are forced to cast
 away the __iomem annotation, or otherwise neglect to fix the sparse
 errors thrown when dereferencing pointers to this memory.  Provide
 memremap() as a non __iomem annotated ioremap_*() in the case when
 ioremap is otherwise a pointer to memory.

 The ARCH_HAS_MEMREMAP kconfig symbol is introduced for archs to assert
 that it is safe to recast / reuse the return value from ioremap as a
 normal pointer to memory.  In other words, archs that mandate specific
 accessors for __iomem are not memremap() capable and drivers that care,
 like pmem, can add a dependency to disable themselves on these archs.

 Note, that memremap is a break from the ioremap implementation pattern
 of adding a new memremap_type() for each mapping type.  Instead,
 the implementation defines flags that are passed to the central
 memremap() implementation.

 Outside of ioremap() and ioremap_nocache(), the expectation is that most
 calls to ioremap_type() are seeking memory-like semantics (e.g.
 speculative reads, and prefetching permitted).  These callsites can be
 moved to memremap() over time.

 Cc: Arnd Bergmann a...@arndb.de
 Acked-by: Andy Shevchenko andy.shevche...@gmail.com
 Signed-off-by: Dan Williams dan.j.willi...@intel.com
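
A minimal sketch of the memremap() pattern the commit message describes, assuming the MEMREMAP_WB flag of the eventual API (illustrative only, not taken from the patch itself):

#include <linux/io.h>

/* map a firmware table that is known to be plain memory; memremap()
 * returns an ordinary pointer (no __iomem annotation), so no sparse
 * casts are needed, unlike ioremap_cache() */
static void *map_fw_table(resource_size_t phys, size_t len)
{
    return memremap(phys, len, MEMREMAP_WB);   /* write-back cached mapping */
}

static void unmap_fw_table(void *virt)
{
    memunmap(virt);
}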

 +--------------------------------------------------+------------+------------+------------+
 |                                                  | dc5d38e432 | 163f9409a5 | 5dfe2d5864 |
 +--------------------------------------------------+------------+------------+------------+
 | boot_successes                                   | 63         | 0          | 0          |
 | boot_failures                                    | 0          | 26         | 19         |
 | WARNING:at_kernel/memremap.c:#memremap()         | 0          | 26         | 19         |
 | backtrace:acpi_load_tables                       | 0          | 26         | 19         |
 | backtrace:acpi_early_init                        | 0          | 26         | 19         |
 | IP-Config:Auto-configuration_of_network_failed   | 0          | 0          | 2          |
 +--------------------------------------------------+------------+------------+------------+

 [0.227312] ACPI: Core revision 20150619
 [0.227690] memremap: acpi_os_map_iomem
 [0.228021] [ cut here ]
 [0.228406] WARNING: CPU: 0 PID: 0 at kernel/memremap.c:31 memremap+0x73/0x159()
 [0.229202] memremap attempted on unknown/mixed range 0x0ffe size: 4096
 [0.229829] Modules linked in:
 [0.230106] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.2.0-rc3-7-g163f940 #1
 [0.230718]    c175bf04 c1489068 c175bf30 c175bf20 c103a1ae 001f
 [0.231456]  c10f6407 1000 0001 0001 c175bf38 c103a1f0 0009 c175bf30
 [0.232242]  c16b3ecb c175bf4c c175bf68 c10f6407 c16b3f04 001f c16b3ecb c175bf54
 [0.232985] Call Trace:
 [0.233196]  [c1489068] dump_stack+0x48/0x60
 [0.233570]  [c103a1ae] warn_slowpath_common+0x89/0xa0
 [0.234016]  [c10f6407] ? memremap+0x73/0x159
 [0.234396]  [c103a1f0] warn_slowpath_fmt+0x2b/0x2f
 [0.234816]  [c10f6407] memremap+0x73/0x159
 [0.235190]  [c1486181] acpi_os_map_iomem+0x10b/0x15f
 [0.235660]  [c14861e2] acpi_os_map_memory+0xd/0xf
 [0.236089]  [c1293d5c] acpi_tb_acquire_table+0x35/0x5a
 [0.236538]  [c1293e41] acpi_tb_validate_table+0x22/0x37
 [0.237001]  [c1841255] acpi_load_tables+0x38/0x146
 [0.237424]  [c184080e] acpi_early_init+0x64/0xd4
 [0.237830]  [c1817b14] start_kernel+0x3e1/0x447
 [0.238232]  [c18172bb] i386_start_kernel+0x85/0x89
 [0.238734] ---[ end trace cb88537fdc8fa200 ]---
 [0.239133] ACPI Exception: AE_NO_ACPI_TABLES, While loading namespace from ACPI tables (20150619/tbxfload-80)

 git bisect start 5dfe2d5864e91244e7befaa2317519ea15dc9c89 52721d9d3334c1cb1f76219a161084094ec634dc --
 git bisect good cfc652fabeee43adf800889a5a4a935a9af090a7  # 07:04 20+  0  arch, drivers: don't include asm/io.h directly, use linux/io.h instead
 git bisect good dc5d38e432ff171125f746d80a037692feb16fc9  # 07:11 22+  0  intel_iommu: fix