On Sun, Oct 9, 2016 at 5:35 PM, Haozhong Zhang wrote:
> Overview
>
> This RFC kernel patch series, along with the corresponding patch series
> for Xen, QEMU and ndctl, implements Xen vNVDIMM, which can map host
> NVDIMM devices into a Xen HVM domU as vNVDIMM devices.
>
Hello everyone.
When I use gdb to trace qemu-xen-traditional, I have a question about
host_alarm_handler. I understand that it implements a dynamic tick,
going tickless to save CPU cost. However, I found that the following
code never runs when I use qemu-xen-traditional
with
Add support for creating the Xen mode namespace, which turns the
underlying pfn device into PFN_MODE_XEN.
Signed-off-by: Haozhong Zhang
---
ndctl/builtin-xaction-namespace.c | 7 ++-
ndctl/lib/libndctl.c | 6 ++
ndctl/libndctl.h.in | 2 ++
A pfn device in PFN_MODE_XEN reserves an area for the Xen hypervisor to
place its own pmem management data structures (i.e. the frame table and
the M2P table). The reserved area is neither used nor mapped by the
Linux kernel; only the data area is mapped.
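To make the sizing concrete, here is a minimal Python sketch of how large
such a reserved area might need to be. The per-page entry sizes (a 32-byte
frame-table entry and an 8-byte M2P entry) are illustrative assumptions,
not the actual kernel or Xen constants:

```python
# Illustrative sketch only: estimate the reserved-area size for a
# PFN_MODE_XEN device, assuming hypothetical per-page entry sizes.
PAGE_SIZE = 4096
FRAME_TABLE_ENTRY = 32   # assumed size of one frame-table entry
M2P_ENTRY = 8            # assumed size of one M2P (MFN-to-PFN) entry

def reserved_area_size(device_bytes):
    npages = device_bytes // PAGE_SIZE
    raw = npages * (FRAME_TABLE_ENTRY + M2P_ENTRY)
    # Round up to a page boundary so the data area stays page-aligned.
    return (raw + PAGE_SIZE - 1) // PAGE_SIZE * PAGE_SIZE
```

Under these assumed entry sizes, a 16 GiB device would give up about
160 MiB of itself to the reserved area, with the rest left as the data
area that the kernel actually maps.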
Signed-off-by: Haozhong Zhang
The Xen hypervisor does not include an NVDIMM driver and relies on the
driver in Dom0 Linux to probe pfn devices in PFN_MODE_XEN. Whenever such
a pfn device is probed, Dom0 Linux reports the pages of the entire
device, its reserved area and its data area to the Xen hypervisor.
Signed-off-by: Haozhong Zhang
Reserve the address space after guest physical memory for the hotplug
memory region, which the existing implementation uses to place NVDIMM
devices.
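As a rough illustration of the layout, the sketch below places the
hotplug region immediately after guest RAM above 4 GiB. The 1 GiB
alignment is an assumption made for the example, not a value taken from
the patch:

```python
# Sketch under assumptions: the hotplug memory region (where vNVDIMMs
# land) starts at the first aligned address after guest RAM above 4 GiB.
GIB = 1 << 30

def hotplug_region_base(above_4g_size, align=GIB):
    end = 4 * GIB + above_4g_size          # end of above-4G guest RAM
    return (end + align - 1) // align * align  # round up to alignment
```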
Signed-off-by: Haozhong Zhang
---
Cc: "Michael S. Tsirkin"
Cc: Igor Mammedov
Build and copy the NFIT to the guest when QEMU is used as the device
model for Xen. The NFIT checksum is left blank and will be filled in by
the Xen hvmloader.
Signed-off-by: Haozhong Zhang
---
Cc: "Michael S. Tsirkin"
Cc: Igor Mammedov
Cc:
Overview
This RFC kernel patch series, along with the corresponding patch series
for Xen, QEMU and ndctl, implements Xen vNVDIMM, which can map host
NVDIMM devices into a Xen HVM domU as vNVDIMM devices.
The Xen hypervisor does not include an NVDIMM driver, so it needs
assistance from the
No fw_cfg is created when QEMU is used as the device model for Xen.
Signed-off-by: Haozhong Zhang
---
Cc: Xiao Guangrong
Cc: "Michael S. Tsirkin"
Cc: Igor Mammedov
---
hw/acpi/nvdimm.c | 7 +--
xen_acpi_copy_to_guest() will be used later to copy NVDIMM ACPI to the
guest.
Signed-off-by: Haozhong Zhang
---
Cc: Stefano Stabellini
Cc: Anthony Perard
Cc: xen-de...@lists.xensource.com
---
include/hw/xen/xen.h | 6
Build and copy NVDIMM namespace devices to the guest when QEMU is used
as the device model for Xen. Only the body of each AML device is built
and copied; the Xen hvmloader will build the complete namespace devices
from them and put them in SSDT tables.
Signed-off-by: Haozhong Zhang
Some virtual devices (e.g. NVDIMM) use the host memory backend to map
their backend resources to the guest. When those devices are used on
Xen, the mapping has to be managed outside of QEMU. In order to reuse
the rest of the implementation of those devices, we introduce a host
memory backend for Xen
Overview
This RFC QEMU patch series, along with the corresponding patch series
for Xen, the Linux kernel and ndctl, implements vNVDIMM for Xen HVM
guests. DSM (and hence labels) and hotplug are not supported by this
patch series and will be implemented later.
Design and Implementation
Xen uses this command to get the backend resource, guest SPA and size of
NVDIMM devices in order to map them to the guest.
Signed-off-by: Haozhong Zhang
---
Cc: Markus Armbruster
Cc: Xiao Guangrong
Cc: "Michael S. Tsirkin"
We can map host pmem devices, or files on pmem devices, to guests. This
patch adds support for mapping files on pmem devices. The implementation
relies on the Linux pmem driver and kernel APIs, so it currently
functions only when libxl is compiled for Linux.
Signed-off-by: Haozhong Zhang
One guest page is reserved for the device model to place guest ACPI. The
base address and size of the reserved area are passed to the device
model via the XenStore keys hvmloader/dm-acpi/{address, length}.
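A minimal sketch of how a device model might decode those two keys once
it has read them from XenStore. The hex-string encoding and the dict
input are assumptions made for illustration, not the actual wire format:

```python
# Illustrative sketch: turn the hvmloader/dm-acpi/{address, length}
# values (assumed here to be hex strings) into a (base, length) pair.
def parse_dm_acpi(xs_values):
    return int(xs_values["address"], 16), int(xs_values["length"], 16)
```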
Signed-off-by: Haozhong Zhang
---
Cc: Ian Jackson
When memory-backend-xen is used, the label_data pointer cannot be
obtained via memory_region_get_ram_ptr(). We will use other functions to
get label_data once we introduce NVDIMM label support to Xen.
Signed-off-by: Haozhong Zhang
---
Cc: Xiao Guangrong
For an xl vNVDIMM config
vnvdimms = [ '/path/to/pmem0', '/path/to/pmem1', ... ]
the following QEMU options are built:
-machine ,nvdimm
-m ,slots=$NR_SLOTS,maxmem=$MEM_SIZE
-object memory-backend-xen,id=mem1,size=$PMEM0_SIZE,mem-path=/path/to/pmem0
-device nvdimm,id=nvdimm1,memdev=mem1
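The per-device option construction above can be sketched as follows.
build_vnvdimm_args and the explicit sizes argument are hypothetical
helpers invented for the sketch (libxl would obtain the device sizes
itself); only the option strings mirror the cover letter:

```python
# Sketch: build the memory-backend-xen/nvdimm option pairs for each
# entry in the xl 'vnvdimms' list. Names and parameters are illustrative.
def build_vnvdimm_args(vnvdimms, sizes):
    args = []
    for i, (path, size) in enumerate(zip(vnvdimms, sizes), start=1):
        args += ["-object",
                 f"memory-backend-xen,id=mem{i},size={size},mem-path={path}"]
        args += ["-device", f"nvdimm,id=nvdimm{i},memdev=mem{i}"]
    return args
```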
libacpi needs to access information placed in XenStore in order to load
ACPI built by the device model.
Signed-off-by: Haozhong Zhang
---
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: Ian Jackson
Cc:
Overview
This RFC Xen patch series, along with the corresponding patch series for
QEMU, the Linux kernel and ndctl, implements the basic functionality of
vNVDIMM for HVM domains.
It currently supports assigning host pmem devices, or files on host pmem
devices, to HVM domains as virtual NVDIMM
A reserved area on each pmem region is used to place the M2P table.
However, it's not at the beginning of the pmem region, so we need to
specify the location explicitly when creating the M2P table.
Signed-off-by: Haozhong Zhang
---
Cc: Jan Beulich
If any error code is returned when creating a domain, stop the domain
creation.
Signed-off-by: Haozhong Zhang
---
Cc: Ian Jackson
Cc: Wei Liu
---
tools/libxl/libxl_create.c | 4 +++-
1 file changed, 3 insertions(+), 1
ACPI tables built by the device model, whose signatures do not
conflict with tables built by Xen (except SSDT), are loaded after ACPI
tables built by Xen.
ACPI namespace devices built by the device model, whose names do not
conflict with devices built by Xen, are assembled and placed in SSDTs
Expose the minimum allocation unit and the minimum alignment used by the
memory allocator, so that certain ACPI code (e.g. the AML builder added
later) can get contiguous memory allocated by multiple calls to
acpi_ctxt.mem_ops.alloc().
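The idea can be illustrated with a toy bump allocator: once callers know
the minimum unit and alignment, consecutive alloc() calls of
unit-multiple sizes return contiguous addresses. The class name and
constants below are invented for the sketch, not taken from libacpi:

```python
# Toy model of the allocator property the patch exposes. Not libacpi code.
class ToyMemOps:
    MIN_UNIT = 16    # assumed minimum allocation unit
    MIN_ALIGN = 16   # assumed minimum alignment

    def __init__(self, base):
        self.cur = base

    def alloc(self, size):
        # Every allocation is rounded up to MIN_UNIT, so the next call
        # starts exactly where this one ends: allocations are contiguous.
        size = (size + self.MIN_UNIT - 1) // self.MIN_UNIT * self.MIN_UNIT
        addr = self.cur
        self.cur += size
        return addr
```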
Signed-off-by: Haozhong Zhang
---
We can map host pmem devices, or files on pmem devices, to guests. This
patch adds support for mapping pmem devices. The implementation relies
on the Linux pmem driver, so it currently functions only when libxl is
compiled for Linux.
Signed-off-by: Haozhong Zhang
---
Cc: Ian
The QMP command 'query-nvdimms' is used by libxl to get the backend, the
guest SPA and the size of each vNVDIMM device; libxl then maps the
backend to the guest for each vNVDIMM device.
Signed-off-by: Haozhong Zhang
---
Cc: Ian Jackson
Cc: Wei
The Xen hypervisor does not include a pmem driver. Instead, it relies on
the pmem driver in Dom0 to report the PFN ranges of the entire pmem
region, its reserved area and its data area via XENPF_pmem_add. The
reserved area is used by the Xen hypervisor to place the frame table and
the M2P table, and is disallowed
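A minimal sketch of the consistency check such a report implies: the
reserved and data areas must lie inside the region and must not overlap.
The half-open (start_pfn, end_pfn) tuples are an assumption for
illustration, not the actual XENPF_pmem_add interface:

```python
# Sketch: sanity-check a reported pmem region. Ranges are half-open
# (start_pfn, end_pfn) pairs; the layout is assumed for illustration.
def pmem_report_ok(region, rsv, data):
    (rs, re), (vs, ve), (ds, de) = region, rsv, data
    inside = rs <= vs < ve <= re and rs <= ds < de <= re
    disjoint = ve <= ds or de <= vs
    return inside and disjoint
```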
It is used by libacpi to generate SSDTs from ACPI namespace devices
built by the device model.
Signed-off-by: Haozhong Zhang
---
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: Ian Jackson
Cc: Wei Liu
This callback is used when libacpi needs to access, in place, ACPI built
by the device model, whose location is given as a physical address.
Signed-off-by: Haozhong Zhang
---
Cc: Jan Beulich
Cc: Andrew Cooper
Cc: Ian
A reserved area on each pmem region is used to place the frame table.
However, it's not at the beginning of the pmem region, so we need to
specify the location explicitly when extending the frame table.
Signed-off-by: Haozhong Zhang
---
Cc: Jan Beulich
XENMEM_populate_pmemmap is used by the toolstack to map given host pmem
pages to given guest pages. Only pages in the data area of a pmem region
are allowed to be mapped to the guest.
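The data-area restriction amounts to a simple range check before any
mapping happens. The function name and half-open PFN range are
assumptions made for illustration:

```python
# Sketch of the check implied by XENMEM_populate_pmemmap: every host PFN
# to be mapped must fall inside the data area [data_start, data_end).
def may_map_to_guest(host_pfns, data_start, data_end):
    return all(data_start <= p < data_end for p in host_pfns)
```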
Signed-off-by: Haozhong Zhang
---
Cc: Ian Jackson
Cc: Wei Liu
The host pmem pages mapped to a domain are unassigned at domain
destruction so that they can be used by other domains in the future.
Signed-off-by: Haozhong Zhang
---
Cc: Jan Beulich
Cc: Andrew Cooper
---
xen/arch/x86/domain.c | 5
Hi all,
During the development of a Linux kernel PCI driver with SR-IOV I ran
into some difficulty, and I wanted to make sure that I understand the
Xen concepts correctly.
The thing that confuses me is the built-in function iommu_present, which
is usually used by drivers that rely on an IOMMU.
My problem is that
On Tue, Oct 4, 2016 at 11:06 AM, Paolo Bonzini wrote:
>
>
> On 04/10/2016 08:43, Emil Condrea wrote:
>> xen_be_frontend_changed -> xen_fe_frontend_changed
>
> This is not correct. The front-end is implemented in the guest domain,
> while the back-end is implemented in the
Hey guys, I have a blocker here which I can't overcome;
maybe someone can help.
make[7]: Entering directory '/root/xen/tools/firmware/etherboot/ipxe/src'
[BUILD] bin/stringextra.o
core/stringextra.c: In function ‘strtok’:
core/stringextra.c:189:18: error: nonnull argument ‘s’ compared to
On Wed, Sep 28, 2016 at 11:54 PM, Andre Przywara wrote:
> Create a new file to hold the emulation code for the ITS widget.
> For now we emulate the memory mapped ITS registers and provide a stub
> to introduce the ITS command handling framework (but without actually
>
Hi Andre,
On ThunderX, MAPD commands are failing with error 0x1,
which means DEVID out of range.
On Wed, Sep 28, 2016 at 11:54 PM, Andre Przywara wrote:
> Each ITS maps a pair of a DeviceID (usually the PCI b/d/f triplet) and
> an EventID (the MSI payload or interrupt
This run is configured for baseline tests only.
flight 67853 ovmf real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/67853/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
build-amd64-xsm 5 xen-build
flight 101342 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/101342/
Perfect :-)
All tests in this flight passed as required
version targeted for testing:
ovmf 6859cc8b72d3c205853dd1030b143439f5b2215a
baseline version:
ovmf
This run is configured for baseline tests only.
flight 67852 ovmf real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/67852/
Perfect :-)
All tests in this flight passed as required
version targeted for testing:
ovmf 8f3ecc5e530b5d4432ce5622149bd636a8880fb7
baseline
flight 101343 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/101343/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
coverity-amd64 6 coverity-upload fail REGR. vs. 101279
version
flight 101341 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/101341/
Perfect :-)
All tests in this flight passed as required
version targeted for testing:
ovmf 8f3ecc5e530b5d4432ce5622149bd636a8880fb7
baseline version:
ovmf
> -----Original Message-----
> From: Jan Beulich [mailto:jbeul...@suse.com]
> Sent: Wednesday, September 28, 2016 5:39 PM
> To: Wu, Feng
> Cc: andrew.coop...@citrix.com; dario.faggi...@citrix.com;
> george.dun...@eu.citrix.com; Tian, Kevin ; xen-
>
flight 101339 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/101339/
Failures :-/ but no regressions.
Regressions which are regarded as allowable (not blocking):
test-amd64-amd64-xl-qemut-win7-amd64 16 guest-stop fail like 101332
From: He Chao
SMEP/SMAP is a security feature that prevents the kernel from
involuntarily executing/accessing user addresses; any such behavior
leads to a page fault. In the earlier code, SMEP/SMAP is enabled (in
CR4) for both Xen and HVM guests. Setting the SMEP/SMAP bits in Xen's
CR4 would enforce
On 16-09-30 17:29:58, Konrad Rzeszutek Wilk wrote:
> On Thu, Aug 25, 2016 at 01:22:45PM +0800, Yi Sun wrote:
> > This patch is the xl/xc changes to support Intel L2 CAT
> > (Cache Allocation Technology).
> >
> > The new level option is introduced to original CAT setting
> > command in order to
On 16-09-30 17:23:43, Konrad Rzeszutek Wilk wrote:
> On Thu, Sep 22, 2016 at 10:15:44AM +0800, Yi Sun wrote:
> > Add L2 CAT (Cache Allocation Technology) feature support in
> > hypervisor:
> > - Implement 'struct feat_ops' callback functions for L2 CAT
> > and initialize L2 CAT feature and add
Thanks for reviewing the patches! Sorry for the late reply; Oct 1 to 7
is the Chinese National Holiday.
On 16-09-30 17:18:33, Konrad Rzeszutek Wilk wrote:
> On Thu, Sep 22, 2016 at 10:15:20AM +0800, Yi Sun wrote:
> > Current psr.c is designed for supporting L3 CAT/CDP. It has many
> > limitations