>>> On 22.04.16 at 14:54, wrote:
> On 04/22/16 06:36, Jan Beulich wrote:
>> >>> On 22.04.16 at 14:26, wrote:
>> > On 04/22/16 04:53, Jan Beulich wrote:
>> >> Perhaps I have got confused by the back and forth. If we're to
>> >> use struct
>>> On 22.04.16 at 14:26, wrote:
> On 04/22/16 04:53, Jan Beulich wrote:
>> Perhaps I have got confused by the back and forth. If we're to
>> use struct page_info, then everything should be following a
>> similar flow to what happens for normal RAM, i.e. normal page
>>
On 04/22/16 04:53, Jan Beulich wrote:
> >>> On 22.04.16 at 12:16, wrote:
> > On 04/22/16 02:24, Jan Beulich wrote:
> > [..]
> >> >> >> Well, using existing range struct to manage guest access permissions
> >> >> >> to nvdimm could consume too much space which could not
>>> On 22.04.16 at 12:16, wrote:
> On 04/22/16 02:24, Jan Beulich wrote:
> [..]
>> >> >> Well, using the existing range struct to manage guest access permissions
>> >> >> to nvdimm could consume too much space, which could fit in neither
>> >> >> memory nor nvdimm. If the
On 04/22/16 02:24, Jan Beulich wrote:
[..]
> >> >> Well, using the existing range struct to manage guest access permissions
> >> >> to nvdimm could consume too much space, which could fit in neither
> >> >> memory nor nvdimm. If the above solution looks really error-prone,
> >> >> perhaps we can
>>> On 22.04.16 at 04:36, wrote:
> On 04/21/16 01:04, Jan Beulich wrote:
>> >>> On 21.04.16 at 07:09, wrote:
>> > On 04/12/16 16:45, Haozhong Zhang wrote:
>> >> On 04/08/16 09:52, Jan Beulich wrote:
>> >> > >>> On 08.04.16 at 07:02,
On 04/21/16 01:04, Jan Beulich wrote:
> >>> On 21.04.16 at 07:09, wrote:
> > On 04/12/16 16:45, Haozhong Zhang wrote:
> >> On 04/08/16 09:52, Jan Beulich wrote:
> >> > >>> On 08.04.16 at 07:02, wrote:
> >> > > On 03/29/16 04:49, Jan Beulich
>>> On 21.04.16 at 07:09, wrote:
> On 04/12/16 16:45, Haozhong Zhang wrote:
>> On 04/08/16 09:52, Jan Beulich wrote:
>> > >>> On 08.04.16 at 07:02, wrote:
>> > > On 03/29/16 04:49, Jan Beulich wrote:
>> > >> >>> On 29.03.16 at 12:10,
On 04/12/16 16:45, Haozhong Zhang wrote:
> On 04/08/16 09:52, Jan Beulich wrote:
> > >>> On 08.04.16 at 07:02, wrote:
> > > On 03/29/16 04:49, Jan Beulich wrote:
> > >> >>> On 29.03.16 at 12:10, wrote:
> > >> > On 03/29/16 03:11, Jan Beulich
On 04/08/16 09:52, Jan Beulich wrote:
> >>> On 08.04.16 at 07:02, wrote:
> > On 03/29/16 04:49, Jan Beulich wrote:
> >> >>> On 29.03.16 at 12:10, wrote:
> >> > On 03/29/16 03:11, Jan Beulich wrote:
> >> >> >>> On 29.03.16 at 10:47,
>>> On 08.04.16 at 07:02, wrote:
> On 03/29/16 04:49, Jan Beulich wrote:
>> >>> On 29.03.16 at 12:10, wrote:
>> > On 03/29/16 03:11, Jan Beulich wrote:
>> >> >>> On 29.03.16 at 10:47, wrote:
> [..]
>> >> > I still
On 03/29/16 04:49, Jan Beulich wrote:
> >>> On 29.03.16 at 12:10, wrote:
> > On 03/29/16 03:11, Jan Beulich wrote:
> >> >>> On 29.03.16 at 10:47, wrote:
[..]
> >> > I still cannot find a neat approach to manage guest permissions for
> >> >
Ian Jackson wrote:
>> >> > Haozhong Zhang writes ("Re: [Xen-devel] [RFC Design Doc] Add vNVDIMM
> support for Xen"):
>> >> > > QEMU keeps mappings of guest memory because (1) that mapping is
>> >> > > created by itself, and
On 03/29/16 03:11, Jan Beulich wrote:
> >>> On 29.03.16 at 10:47, <haozhong.zh...@intel.com> wrote:
> > On 03/17/16 22:21, Haozhong Zhang wrote:
> >> On 03/17/16 14:00, Ian Jackson wrote:
> >> > Haozhong Zhang writes ("Re: [Xen-devel] [R
>>> On 29.03.16 at 10:47, <haozhong.zh...@intel.com> wrote:
> On 03/17/16 22:21, Haozhong Zhang wrote:
>> On 03/17/16 14:00, Ian Jackson wrote:
>> > Haozhong Zhang writes ("Re: [Xen-devel] [RFC Design Doc] Add vNVDIMM
>> > support for Xen"):
On 03/17/16 22:21, Haozhong Zhang wrote:
> On 03/17/16 14:00, Ian Jackson wrote:
> > Haozhong Zhang writes ("Re: [Xen-devel] [RFC Design Doc] Add vNVDIMM
> > support for Xen"):
> > > QEMU keeps mappings of guest memory because (1) that mapping is
> > > c
Hi Jan and Konrad,
On 03/04/16 15:30, Haozhong Zhang wrote:
> Suddenly realize it's unnecessary to let QEMU get SPA ranges of NVDIMM
> or files on NVDIMM. We can move that work to the toolstack and pass the SPA
> ranges obtained by the toolstack to QEMU. In this way, no privileged operations
> (mmap/mlock/...)
Jan Beulich writes ("Re: [Xen-devel] [RFC Design Doc] Add vNVDIMM support for
Xen"):
> So that again leaves unaddressed the question of what you
> imply to do when a guest elects to use such a page as page
> table. I'm afraid any attempt of yours to invent something that
> i
>>> On 17.03.16 at 09:58, wrote:
> On 03/16/16 09:23, Jan Beulich wrote:
>> >>> On 16.03.16 at 15:55, wrote:
>> > On 03/16/16 08:23, Jan Beulich wrote:
>> >> >>> On 16.03.16 at 14:55, wrote:
>> >> > On 03/16/16 07:16,
> Then there is another problem (which also exists in the current
> design): does Xen need to emulate NVDIMM _DSM for dom0? Take the _DSM
> that access label storage area (for namespace) for example:
No. And it really can't, as each vendor's _DSM is different - and there
is no ACPI AML interpreter
>>> On 17.03.16 at 13:44, wrote:
> On 03/17/16 05:04, Jan Beulich wrote:
>> >>> On 17.03.16 at 09:58, wrote:
>> > On 03/16/16 09:23, Jan Beulich wrote:
>> >> >>> On 16.03.16 at 15:55, wrote:
>> >> > On 03/16/16 08:23,
On 03/17/16 14:00, Ian Jackson wrote:
> Haozhong Zhang writes ("Re: [Xen-devel] [RFC Design Doc] Add vNVDIMM support
> for Xen"):
> > QEMU keeps mappings of guest memory because (1) that mapping is
> > created by itself, and/or (2) certain device emulation needs to
On 03/17/16 22:12, Xu, Quan wrote:
> On March 17, 2016 9:37pm, Haozhong Zhang wrote:
> > For PV guests (if we add vNVDIMM support for them in future), as I'm going
> > to
> > use page_info struct for it, I suppose the current mechanism in Xen can
> > handle
> > this
On 03/16/16 08:23, Jan Beulich wrote:
> >>> On 16.03.16 at 14:55, wrote:
> > On 03/16/16 07:16, Jan Beulich wrote:
> >> Which reminds me: When considering a file on NVDIMM, how
> >> are you making sure the mapping of the file to disk (i.e.
> >> memory) blocks doesn't
On Wed, Mar 16, 2016 at 08:55:08PM +0800, Haozhong Zhang wrote:
> Hi Jan and Konrad,
>
> On 03/04/16 15:30, Haozhong Zhang wrote:
> > Suddenly realize it's unnecessary to let QEMU get SPA ranges of NVDIMM
> > or files on NVDIMM. We can move that work to the toolstack and pass the SPA
> > ranges obtained by
>>> On 17.03.16 at 14:29, wrote:
> On 03/17/16 06:59, Jan Beulich wrote:
>> >>> On 17.03.16 at 13:44, wrote:
>> > Hmm, making Xen has full control could at least make reserving space
>> > on NVDIMM easier. I guess full control does not include
On 03/16/16 07:16, Jan Beulich wrote:
> >>> On 16.03.16 at 13:55, wrote:
> > Hi Jan and Konrad,
> >
> > On 03/04/16 15:30, Haozhong Zhang wrote:
> >> Suddenly realize it's unnecessary to let QEMU get SPA ranges of NVDIMM
> >> or files on NVDIMM. We can move that work to
>>> On 16.03.16 at 13:55, wrote:
> Hi Jan and Konrad,
>
> On 03/04/16 15:30, Haozhong Zhang wrote:
>> Suddenly realize it's unnecessary to let QEMU get SPA ranges of NVDIMM
>> or files on NVDIMM. We can move that work to the toolstack and pass the SPA
>> ranges obtained by the toolstack
On 03/16/16 09:23, Jan Beulich wrote:
> >>> On 16.03.16 at 15:55, wrote:
> > On 03/16/16 08:23, Jan Beulich wrote:
> >> >>> On 16.03.16 at 14:55, wrote:
> >> > On 03/16/16 07:16, Jan Beulich wrote:
> >> >> Which reminds me: When considering a
On 03/17/16 06:59, Jan Beulich wrote:
> >>> On 17.03.16 at 13:44, wrote:
> > On 03/17/16 05:04, Jan Beulich wrote:
> >> >>> On 17.03.16 at 09:58, wrote:
> >> > On 03/16/16 09:23, Jan Beulich wrote:
> >> >> >>> On 16.03.16 at 15:55,
On 03/17/16 11:05, Ian Jackson wrote:
> Jan Beulich writes ("Re: [Xen-devel] [RFC Design Doc] Add vNVDIMM support for
> Xen"):
> > So that again leaves unaddressed the question of what you
> > imply to do when a guest elects to use such a page as page
> > tabl
>>> On 16.03.16 at 14:55, wrote:
> On 03/16/16 07:16, Jan Beulich wrote:
>> Which reminds me: When considering a file on NVDIMM, how
>> are you making sure the mapping of the file to disk (i.e.
>> memory) blocks doesn't change while the guest has access
>> to it, e.g.
On March 17, 2016 9:37pm, Haozhong Zhang wrote:
> For PV guests (if we add vNVDIMM support for them in future), as I'm going to
> use page_info struct for it, I suppose the current mechanism in Xen can handle
> this case. I'm not familiar with PV memory management
The
>>> On 16.03.16 at 15:55, wrote:
> On 03/16/16 08:23, Jan Beulich wrote:
>> >>> On 16.03.16 at 14:55, wrote:
>> > On 03/16/16 07:16, Jan Beulich wrote:
>> >> Which reminds me: When considering a file on NVDIMM, how
>> >> are you making sure the
>>> On 17.03.16 at 14:37, <haozhong.zh...@intel.com> wrote:
> On 03/17/16 11:05, Ian Jackson wrote:
>> Jan Beulich writes ("Re: [Xen-devel] [RFC Design Doc] Add vNVDIMM support
>> for
> Xen"):
>> > So that again leaves unaddressed the quest
On 03/17/16 07:56, Jan Beulich wrote:
> >>> On 17.03.16 at 14:37, <haozhong.zh...@intel.com> wrote:
> > On 03/17/16 11:05, Ian Jackson wrote:
> >> Jan Beulich writes ("Re: [Xen-devel] [RFC Design Doc] Add vNVDIMM support
> >> for
> > Xen"):
On 03/17/16 05:04, Jan Beulich wrote:
> >>> On 17.03.16 at 09:58, wrote:
> > On 03/16/16 09:23, Jan Beulich wrote:
> >> >>> On 16.03.16 at 15:55, wrote:
> >> > On 03/16/16 08:23, Jan Beulich wrote:
> >> >> >>> On 16.03.16 at 14:55,
Haozhong Zhang writes ("Re: [Xen-devel] [RFC Design Doc] Add vNVDIMM support
for Xen"):
> QEMU keeps mappings of guest memory because (1) that mapping is
> created by itself, and/or (2) certain device emulation needs to access
> the guest memory. But for vNVDIMM, I'm going t
On 03/09/16 09:17, Jan Beulich wrote:
> >>> On 09.03.16 at 13:22, wrote:
> > On 03/08/16 02:27, Jan Beulich wrote:
> >> >>> On 08.03.16 at 10:15, wrote:
[...]
> > I should reexplain the choice of data structures and where to put them.
> >
> >
>>> On 09.03.16 at 13:22, wrote:
> On 03/08/16 02:27, Jan Beulich wrote:
>> >>> On 08.03.16 at 10:15, wrote:
>> > More thoughts on reserving NVDIMM space for per-page structures
>> >
>> > Currently, a per-page struct for managing mapping of
On 03/08/16 02:27, Jan Beulich wrote:
> >>> On 08.03.16 at 10:15, wrote:
> > More thoughts on reserving NVDIMM space for per-page structures
> >
> > Currently, a per-page struct for managing mapping of NVDIMM pages may
> > include following fields:
> >
> > struct
>>> On 08.03.16 at 10:15, wrote:
> More thoughts on reserving NVDIMM space for per-page structures
>
> Currently, a per-page struct for managing mapping of NVDIMM pages may
> include following fields:
>
> struct nvdimm_page
> {
> uint64_t mfn;/* MFN of SPA
On 03/04/16 10:20, Haozhong Zhang wrote:
> On 03/02/16 06:03, Jan Beulich wrote:
> > >>> On 02.03.16 at 08:14, wrote:
> > > It means NVDIMM is very possibly mapped at page granularity, and
> > > hypervisor needs per-page data structures like page_info (rather than the
>
On 03/07/16 15:53, Konrad Rzeszutek Wilk wrote:
> On Wed, Mar 02, 2016 at 03:14:52PM +0800, Haozhong Zhang wrote:
> > On 03/01/16 13:49, Konrad Rzeszutek Wilk wrote:
> > > On Tue, Mar 01, 2016 at 06:33:32PM +, Ian Jackson wrote:
> > > > Haozhong Zhang writes ("
On Wed, Mar 02, 2016 at 03:14:52PM +0800, Haozhong Zhang wrote:
> On 03/01/16 13:49, Konrad Rzeszutek Wilk wrote:
> > On Tue, Mar 01, 2016 at 06:33:32PM +, Ian Jackson wrote:
> > > Haozhong Zhang writes ("Re: [Xen-devel] [RFC Design Doc] Add vNVDIMM
> > > sup
On 02/16/16 05:55, Jan Beulich wrote:
> >>> On 16.02.16 at 12:14, wrote:
> > On Mon, 15 Feb 2016, Zhang, Haozhong wrote:
> >> On 02/04/16 20:24, Stefano Stabellini wrote:
> >> > On Thu, 4 Feb 2016, Haozhong Zhang wrote:
> >> > > On 02/03/16 15:22, Stefano
On 03/02/16 06:03, Jan Beulich wrote:
> >>> On 02.03.16 at 08:14, wrote:
> > It means NVDIMM is very possibly mapped at page granularity, and
> > hypervisor needs per-page data structures like page_info (rather than the
> > range set style nvdimm_pages) to manage those
>>> On 02.03.16 at 08:14, wrote:
> It means NVDIMM is very possibly mapped at page granularity, and
> hypervisor needs per-page data structures like page_info (rather than the
> range set style nvdimm_pages) to manage those mappings.
>
> Then we will face the problem
On 03/01/16 13:49, Konrad Rzeszutek Wilk wrote:
> On Tue, Mar 01, 2016 at 06:33:32PM +, Ian Jackson wrote:
> > Haozhong Zhang writes ("Re: [Xen-devel] [RFC Design Doc] Add vNVDIMM
> > support for Xen"):
> > > On 02/18/16 21:14, Konrad Rzeszutek Wilk wrot
On Tue, Mar 01, 2016 at 06:33:32PM +, Ian Jackson wrote:
> Haozhong Zhang writes ("Re: [Xen-devel] [RFC Design Doc] Add vNVDIMM support
> for Xen"):
> > On 02/18/16 21:14, Konrad Rzeszutek Wilk wrote:
> > > [someone:]
> > > > (2) For XENM
Haozhong Zhang writes ("Re: [Xen-devel] [RFC Design Doc] Add vNVDIMM support
for Xen"):
> On 02/18/16 21:14, Konrad Rzeszutek Wilk wrote:
> > [someone:]
> > > (2) For XENMAPSPACE_gmfn, _gmfn_range and _gmfn_foreign,
> > >(a) never map idx in them to GFNs
>>> On 01.03.16 at 14:51, wrote:
> Haozhong Zhang writes ("Re: [RFC Design Doc] Add vNVDIMM support for Xen"):
>> On 02/29/16 05:04, Jan Beulich wrote:
>> > Which will involve adding how much new code to it?
>>
>> Because hvmloader only accepts AML device rather than
Haozhong Zhang writes ("Re: [RFC Design Doc] Add vNVDIMM support for Xen"):
> On 02/29/16 05:04, Jan Beulich wrote:
> > Which will involve adding how much new code to it?
>
> Because hvmloader only accepts AML device rather than arbitrary objects,
> only code that builds the outmost part of AML
On 02/18/16 21:14, Konrad Rzeszutek Wilk wrote:
> > > > QEMU would always use MFN above guest normal ram and I/O holes for
> > > > vNVDIMM. It would attempt to search in that space for a contiguous range
> > > > that is large enough for the vNVDIMM devices. Is the guest able to
> > > > punch
On 02/29/16 05:04, Jan Beulich wrote:
> >>> On 29.02.16 at 12:52, wrote:
> > On 02/29/16 03:12, Jan Beulich wrote:
> >> >>> On 29.02.16 at 10:45, wrote:
> >> > On 02/29/16 02:01, Jan Beulich wrote:
> >> >> >>> On 28.02.16 at 15:48,
>>> On 29.02.16 at 12:52, wrote:
> On 02/29/16 03:12, Jan Beulich wrote:
>> >>> On 29.02.16 at 10:45, wrote:
>> > On 02/29/16 02:01, Jan Beulich wrote:
>> >> >>> On 28.02.16 at 15:48, wrote:
>> >> > Anyway, we may
On 02/29/16 03:12, Jan Beulich wrote:
> >>> On 29.02.16 at 10:45, wrote:
> > On 02/29/16 02:01, Jan Beulich wrote:
> >> >>> On 28.02.16 at 15:48, wrote:
> >> > Anyway, we may avoid some conflicts between ACPI tables/objects by
> >> >
>>> On 29.02.16 at 10:45, wrote:
> On 02/29/16 02:01, Jan Beulich wrote:
>> >>> On 28.02.16 at 15:48, wrote:
>> > Anyway, we may avoid some conflicts between ACPI tables/objects by
>> > restricting which tables and objects can be passed from
On 02/29/16 02:01, Jan Beulich wrote:
> >>> On 28.02.16 at 15:48, wrote:
> > On 02/24/16 09:54, Jan Beulich wrote:
> >> >>> On 24.02.16 at 16:48, wrote:
> >> > On 02/24/16 07:24, Jan Beulich wrote:
> >> >> >>> On 24.02.16 at 14:28,
>>> On 28.02.16 at 15:48, wrote:
> On 02/24/16 09:54, Jan Beulich wrote:
>> >>> On 24.02.16 at 16:48, wrote:
>> > On 02/24/16 07:24, Jan Beulich wrote:
>> >> >>> On 24.02.16 at 14:28, wrote:
>> >> > On 02/18/16 10:17,
On 02/24/16 09:54, Jan Beulich wrote:
> >>> On 24.02.16 at 16:48, wrote:
> > On 02/24/16 07:24, Jan Beulich wrote:
> >> >>> On 24.02.16 at 14:28, wrote:
> >> > On 02/18/16 10:17, Jan Beulich wrote:
> >> >> >>> On 01.02.16 at 06:44,
> > > QEMU would always use MFN above guest normal ram and I/O holes for
> > > vNVDIMM. It would attempt to search in that space for a contiguous range
> > > that is large enough for the vNVDIMM devices. Is the guest able to
> > > punch holes in such GFN space?
> >
> > See XENMAPSPACE_* and
>>> On 01.02.16 at 06:44, wrote:
> This design treats host NVDIMM devices as ordinary MMIO devices:
Wrt the cachability note earlier on, I assume you're aware that with
the XSA-154 changes we disallow any cachable mappings of MMIO
by default.
> (1) Dom0 Linux NVDIMM
On 02/17/16 02:08, Jan Beulich wrote:
> >>> On 17.02.16 at 10:01, wrote:
> > On 02/15/16 04:07, Jan Beulich wrote:
> >> >>> On 15.02.16 at 09:43, wrote:
> >> > On 02/03/16 03:15, Konrad Rzeszutek Wilk wrote:
> >> >> > Similarly to that in
>>> On 17.02.16 at 10:01, wrote:
> On 02/15/16 04:07, Jan Beulich wrote:
>> >>> On 15.02.16 at 09:43, wrote:
>> > On 02/03/16 03:15, Konrad Rzeszutek Wilk wrote:
>> >> > Similarly to that in KVM/QEMU, enabling vNVDIMM in Xen is composed of
>>
On 02/16/16 05:55, Jan Beulich wrote:
> >>> On 16.02.16 at 12:14, wrote:
> > On Mon, 15 Feb 2016, Zhang, Haozhong wrote:
> >> On 02/04/16 20:24, Stefano Stabellini wrote:
> >> > On Thu, 4 Feb 2016, Haozhong Zhang wrote:
> >> > > On 02/03/16 15:22, Stefano
On 02/15/16 04:07, Jan Beulich wrote:
> >>> On 15.02.16 at 09:43, wrote:
> > On 02/03/16 03:15, Konrad Rzeszutek Wilk wrote:
> >> > Similarly to that in KVM/QEMU, enabling vNVDIMM in Xen is composed of
> >> > three parts:
> >> > (1) Guest clwb/clflushopt/pcommit
>>> On 16.02.16 at 12:14, wrote:
> On Mon, 15 Feb 2016, Zhang, Haozhong wrote:
>> On 02/04/16 20:24, Stefano Stabellini wrote:
>> > On Thu, 4 Feb 2016, Haozhong Zhang wrote:
>> > > On 02/03/16 15:22, Stefano Stabellini wrote:
>> > > > On Wed, 3 Feb 2016, George
On Mon, 15 Feb 2016, Zhang, Haozhong wrote:
> On 02/04/16 20:24, Stefano Stabellini wrote:
> > On Thu, 4 Feb 2016, Haozhong Zhang wrote:
> > > On 02/03/16 15:22, Stefano Stabellini wrote:
> > > > On Wed, 3 Feb 2016, George Dunlap wrote:
> > > > > On 03/02/16 12:02, Stefano Stabellini wrote:
> > >
>>> On 15.02.16 at 09:43, wrote:
> On 02/03/16 03:15, Konrad Rzeszutek Wilk wrote:
>> > Similarly to that in KVM/QEMU, enabling vNVDIMM in Xen is composed of
>> > three parts:
>> > (1) Guest clwb/clflushopt/pcommit enabling,
>> > (2) Memory mapping, and
>> > (3)
On 02/03/16 23:47, Konrad Rzeszutek Wilk wrote:
> > > > > Open: It seems no system call/ioctl is provided by Linux kernel to
> > > > >get the physical address from a virtual address.
> > > > >/proc/<pid>/pagemap provides information of mapping from
> > > > >VA to PA. Is it an
On 02/03/16 03:15, Konrad Rzeszutek Wilk wrote:
> > 3. Design of vNVDIMM in Xen
>
> Thank you for this design!
>
> >
> > Similarly to that in KVM/QEMU, enabling vNVDIMM in Xen is composed of
> > three parts:
> > (1) Guest clwb/clflushopt/pcommit enabling,
> > (2) Memory mapping, and
> >
On 02/04/16 20:24, Stefano Stabellini wrote:
> On Thu, 4 Feb 2016, Haozhong Zhang wrote:
> > On 02/03/16 15:22, Stefano Stabellini wrote:
> > > On Wed, 3 Feb 2016, George Dunlap wrote:
> > > > On 03/02/16 12:02, Stefano Stabellini wrote:
> > > > > On Wed, 3 Feb 2016, Haozhong Zhang wrote:
> > > >
On 02/05/2016 08:43 PM, Haozhong Zhang wrote:
> On 02/05/16 09:40, Ross Philipson wrote:
>> On 02/03/2016 09:09 AM, Andrew Cooper wrote:
> [...]
>>> I agree.
>>>
>>> There has to be a single entity responsible for collating the eventual
>>> ACPI handed to the guest, and this is definitely
On 02/03/2016 09:09 AM, Andrew Cooper wrote:
On 03/02/16 09:13, Jan Beulich wrote:
On 03.02.16 at 08:00, wrote:
On 02/02/16 17:11, Stefano Stabellini wrote:
Once upon a time somebody made the decision that ACPI tables
on Xen should be static and included in
On 02/05/16 09:40, Ross Philipson wrote:
> On 02/03/2016 09:09 AM, Andrew Cooper wrote:
[...]
> >I agree.
> >
> >There has to be a single entity responsible for collating the eventual
> >ACPI handed to the guest, and this is definitely HVMLoader.
> >
> >However, it is correct that Qemu create the
On Thu, 4 Feb 2016, Haozhong Zhang wrote:
> On 02/03/16 15:22, Stefano Stabellini wrote:
> > On Wed, 3 Feb 2016, George Dunlap wrote:
> > > On 03/02/16 12:02, Stefano Stabellini wrote:
> > > > On Wed, 3 Feb 2016, Haozhong Zhang wrote:
> > > >> Or, we can make a file system on /dev/pmem0, create
>>> On 03.02.16 at 08:00, wrote:
> On 02/02/16 17:11, Stefano Stabellini wrote:
>> Once upon a time somebody made the decision that ACPI tables
>> on Xen should be static and included in hvmloader. That might have been
>> a bad decision but at least it was coherent.
>>> On 03.02.16 at 13:22, wrote:
> On 02/03/16 02:18, Jan Beulich wrote:
>> >>> On 03.02.16 at 09:28, wrote:
>> > On 02/02/16 14:15, Konrad Rzeszutek Wilk wrote:
>> >> > 3.1 Guest clwb/clflushopt/pcommit Enabling
>> >> >
>> >> > The
On 02/03/16 12:02, Stefano Stabellini wrote:
> On Wed, 3 Feb 2016, Haozhong Zhang wrote:
> > On 02/02/16 17:11, Stefano Stabellini wrote:
> > > On Mon, 1 Feb 2016, Haozhong Zhang wrote:
[...]
> > > > This design treats host NVDIMM devices as ordinary MMIO devices:
> > > > (1) Dom0 Linux NVDIMM
On 02/03/16 02:18, Jan Beulich wrote:
> >>> On 03.02.16 at 09:28, wrote:
> > On 02/02/16 14:15, Konrad Rzeszutek Wilk wrote:
> >> > 3.1 Guest clwb/clflushopt/pcommit Enabling
> >> >
> >> > The instruction enabling is simple and we do the same work as in
> >> >
On 02/02/16 14:15, Konrad Rzeszutek Wilk wrote:
> > 3. Design of vNVDIMM in Xen
>
> Thank you for this design!
>
> >
> > Similarly to that in KVM/QEMU, enabling vNVDIMM in Xen is composed of
> > three parts:
> > (1) Guest clwb/clflushopt/pcommit enabling,
> > (2) Memory mapping, and
> >
>>> On 03.02.16 at 09:28, wrote:
> On 02/02/16 14:15, Konrad Rzeszutek Wilk wrote:
>> > 3.1 Guest clwb/clflushopt/pcommit Enabling
>> >
>> > The instruction enabling is simple and we do the same work as in KVM/QEMU.
>> > - All three instructions are exposed to guest
On 03/02/16 09:18, Jan Beulich wrote:
>>
>>> In other words - the NVDIMM resource does not provide any resource
>>> isolation. However this may not be any different than what we had
>>> nowadays with CPU caches.
>>>
>> Does Xen have any mechanism to isolate multiple guests' operations on
>> CPU
>>> On 03.02.16 at 15:30, wrote:
> On 03/02/16 09:18, Jan Beulich wrote:
>>>
In other words - the NVDIMM resource does not provide any resource
isolation. However this may not be any different than what we had
nowadays with CPU caches.
>>> Does Xen
On 03/02/16 12:02, Stefano Stabellini wrote:
> On Wed, 3 Feb 2016, Haozhong Zhang wrote:
>> Or, we can make a file system on /dev/pmem0, create files on it, set
>> the owner of those files to xen-qemuuser-domid$domid, and then pass
>> those files to QEMU. In this way, non-root QEMU should be able
On Wed, 3 Feb 2016, George Dunlap wrote:
> On 03/02/16 12:02, Stefano Stabellini wrote:
> > On Wed, 3 Feb 2016, Haozhong Zhang wrote:
> >> Or, we can make a file system on /dev/pmem0, create files on it, set
> >> the owner of those files to xen-qemuuser-domid$domid, and then pass
> >> those files
On 03/02/16 13:11, Haozhong Zhang wrote:
> On 02/03/16 12:02, Stefano Stabellini wrote:
>> On Wed, 3 Feb 2016, Haozhong Zhang wrote:
>>> On 02/02/16 17:11, Stefano Stabellini wrote:
On Mon, 1 Feb 2016, Haozhong Zhang wrote:
> [...]
> This design treats host NVDIMM devices as ordinary
On 03/02/16 15:22, Stefano Stabellini wrote:
> On Wed, 3 Feb 2016, George Dunlap wrote:
>> On 03/02/16 12:02, Stefano Stabellini wrote:
>>> On Wed, 3 Feb 2016, Haozhong Zhang wrote:
Or, we can make a file system on /dev/pmem0, create files on it, set
the owner of those files to
On 03/02/16 09:13, Jan Beulich wrote:
On 03.02.16 at 08:00, wrote:
>> On 02/02/16 17:11, Stefano Stabellini wrote:
>>> Once upon a time somebody made the decision that ACPI tables
>>> on Xen should be static and included in hvmloader. That might have been
>>> a bad
On 02/03/16 14:09, Andrew Cooper wrote:
> On 03/02/16 09:13, Jan Beulich wrote:
> On 03.02.16 at 08:00, wrote:
> >> On 02/02/16 17:11, Stefano Stabellini wrote:
> >>> Once upon a time somebody made the decision that ACPI tables
> >>> on Xen should be static and
On Wed, Feb 03, 2016 at 03:22:59PM +, Stefano Stabellini wrote:
> On Wed, 3 Feb 2016, George Dunlap wrote:
> > On 03/02/16 12:02, Stefano Stabellini wrote:
> > > On Wed, 3 Feb 2016, Haozhong Zhang wrote:
> > >> Or, we can make a file system on /dev/pmem0, create files on it, set
> > >> the
On 02/03/16 10:47, Konrad Rzeszutek Wilk wrote:
> > > > > Open: It seems no system call/ioctl is provided by Linux kernel to
> > > > >get the physical address from a virtual address.
> > > > >/proc/<pid>/pagemap provides information of mapping from
> > > > >VA to PA. Is it an
On 02/03/16 14:20, Andrew Cooper wrote:
> > (ACPI part is described in Section 3.3 later)
> >
> > Above (1)(2) have already been done in current QEMU. Only (3) needs
> > to be implemented in QEMU. No change is needed in Xen for address
> > mapping in this design.
> >
>
On 02/03/16 15:22, Stefano Stabellini wrote:
> On Wed, 3 Feb 2016, George Dunlap wrote:
> > On 03/02/16 12:02, Stefano Stabellini wrote:
> > > On Wed, 3 Feb 2016, Haozhong Zhang wrote:
> > >> Or, we can make a file system on /dev/pmem0, create files on it, set
> > >> the owner of those files to
> > > > Open: It seems no system call/ioctl is provided by Linux kernel to
> > > >get the physical address from a virtual address.
> > > >/proc/<pid>/pagemap provides information of mapping from
> > > >VA to PA. Is it an acceptable solution to let QEMU parse this
> > > >
On 02/03/16 05:38, Jan Beulich wrote:
> >>> On 03.02.16 at 13:22, wrote:
> > On 02/03/16 02:18, Jan Beulich wrote:
> >> >>> On 03.02.16 at 09:28, wrote:
> >> > On 02/02/16 14:15, Konrad Rzeszutek Wilk wrote:
> >> >> > 3.1 Guest
> > 2.2 vNVDIMM Implementation in KVM/QEMU
> >
> > (1) Address Mapping
> >
> > As described before, the host Linux NVDIMM driver provides a block
> > device interface (/dev/pmem0 at the bottom) for a pmem NVDIMM
> > region. QEMU can then mmap(2) that device into its virtual address
> >
> 3. Design of vNVDIMM in Xen
Thank you for this design!
>
> Similarly to that in KVM/QEMU, enabling vNVDIMM in Xen is composed of
> three parts:
> (1) Guest clwb/clflushopt/pcommit enabling,
> (2) Memory mapping, and
> (3) Guest ACPI emulation.
.. MCE? and vMCE?
>
> The rest of this
> From: Zhang, Haozhong
> Sent: Tuesday, February 02, 2016 3:53 PM
>
> On 02/02/16 15:48, Tian, Kevin wrote:
> > > From: Zhang, Haozhong
> > > Sent: Tuesday, February 02, 2016 3:39 PM
> > >
> > > > btw, how is persistency guaranteed in KVM/QEMU, cross guest
> > > > power off/on? I guess since