On 05/28/21 01:06, Haozhong Zhang wrote:
> The current implementation leaves 0 in the maximum link width (MLW)
> and speed (MLS) fields of the PCI_EXP_LNKCAP register of a xio3130
> downstream port device. As a consequence, when that downstream port
> negotiates the link width and sp
setting MLW and MLS in
PCI_EXP_LNKCAP of the xio3130 downstream port to values defined in its
data manual, i.e., x1 and 2.5 GT respectively.
Signed-off-by: Haozhong Zhang
---
hw/pci-bridge/xio3130_downstream.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/hw/pci-bridge/xio3130_downstrea
On 06/15/18 16:04, David Hildenbrand wrote:
> It is initially 0, so setting it to 0 should be allowed, too.
I'm fine with this change and believe nothing is broken in practice,
but what is expected by the user who sets a zero label size?
Look at nvdimm_dsm_device() which enables label DSMs only
On 06/11/18 19:55, Dan Williams wrote:
> On Mon, Jun 11, 2018 at 9:26 AM, Stefan Hajnoczi wrote:
> > On Mon, Jun 11, 2018 at 06:54:25PM +0800, Zhang Yi wrote:
> >> The nvdimm driver uses memory hot-plug APIs to map its pmem resource,
> >> which is at a section granularity.
> >>
> >> When QEMU emulated
On 03/29/18 19:59 +0100, Dr. David Alan Gilbert wrote:
> * Haozhong Zhang (haozhong.zh...@intel.com) wrote:
> > When loading a zero page, check whether it will be loaded to
> > persistent memory. If yes, load it by libpmem function
> > pmem_memset_nodrain(). Combined with
On 03/29/18 20:12 +0100, Dr. David Alan Gilbert wrote:
> * Haozhong Zhang (haozhong.zh...@intel.com) wrote:
>
>
>
> > Post-copy with NVDIMM currently fails with message "Postcopy on shared
> > RAM (...) is not yet supported". Is it enough?
>
> What does
On 03/12/18 15:39, Stefan Hajnoczi wrote:
> On Wed, Feb 28, 2018 at 03:25:50PM +0800, Haozhong Zhang wrote:
> > QEMU writes to vNVDIMM backends in the vNVDIMM label emulation and
> > live migration. If the backend is on the persistent memory, QEMU needs
> > to t
Reviewers can use ACPI tables in this patch to run
test_acpi_{piix4,q35}_tcg_dimm_pxm cases.
Signed-off-by: Haozhong Zhang <haozhong.zh...@intel.com>
---
tests/acpi-test-data/pc/APIC.dimmpxm | Bin 0 -> 144 bytes
tests/acpi-test-data/pc/DSDT.dimmpxm | Bin 0 -> 6803 bytes
tests/ac
ng code.
* (Patch 3) s/'static-plugged'/'present at boot time' in commit message.
Changes in v2:
* Build SRAT memory affinity structures of PC-DIMM devices as well.
* Add test cases.
Haozhong Zhang (5):
pc-dimm: make qmp_pc_dimm_device_list() sort devices by address
qmp: distinguish
-data/q35/NFIT.dimmpxm
tests/acpi-test-data/q35/SRAT.dimmpxm
tests/acpi-test-data/q35/SSDT.dimmpxm
New APIC and DSDT are needed because of the multiple processors
configuration. New NFIT and SSDT are needed because of NVDIMM.
Signed-off-by: Haozhong Zhang <haozhong.zh...@intel.com>
Sug
ity domain of the last
node as before.
Signed-off-by: Haozhong Zhang <haozhong.zh...@intel.com>
---
hw/i386/acpi-build.c | 56
1 file changed, 52 insertions(+), 4 deletions(-)
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
(qdev_get_machine(), );
could be replaced with simpler:
list = qmp_pc_dimm_device_list();
* follow up patch will use it in build_srat()
Signed-off-by: Haozhong Zhang <haozhong.zh...@intel.com>
Reviewed-by: Igor Mammedov <imamm...@redhat.com>
Acked-by: David Gibson <da...@gibson.dropbear
On 03/10/18 20:31 -0600, Eric Blake wrote:
> On 03/10/2018 07:34 PM, Haozhong Zhang wrote:
> > It may need to treat PC-DIMM and NVDIMM differently, e.g., when
> > deciding the necessity of non-volatile flag bit in SRAT memory
> > affinity structures.
> >
> > NVDI
necessary in the future.
It also fixes "info memory-devices"/query-memory-devices which
currently show nvdimm devices as dimm devices since
object_dynamic_cast(obj, TYPE_PC_DIMM) happily casts nvdimm to
TYPE_PC_DIMM, from which it's inherited.
Signed-off-by: Haozhong Zhang &l
esent at boot time' in commit message.
Changes in v2:
* Build SRAT memory affinity structures of PC-DIMM devices as well.
* Add test cases.
Haozhong Zhang (5):
pc-dimm: make qmp_pc_dimm_device_list() sort devices by address
qmp: distinguish PC-DIMM and NVDIMM in MemoryDeviceInfoList
hw/acpi-build:
it's been inherited from.
Signed-off-by: Haozhong Zhang <haozhong.zh...@intel.com>
---
hmp.c | 14 +++---
hw/mem/pc-dimm.c | 10 +-
numa.c | 19 +--
qapi/misc.json | 6 +-
4 files changed, 38 insertions(+), 11 deletions(-)
diff --git
On 03/08/18 11:22 -0600, Eric Blake wrote:
> On 03/07/2018 08:33 PM, Haozhong Zhang wrote:
> > It may need to treat PC-DIMM and NVDIMM differently, e.g., when
> > deciding the necessity of non-volatile flag bit in SRAT memory
> > affinity structures.
> >
> > NVDI
Ping?
On 02/28/18 15:25 +0800, Haozhong Zhang wrote:
> QEMU writes to vNVDIMM backends in the vNVDIMM label emulation and
> live migration. If the backend is on the persistent memory, QEMU needs
> to take proper operations to ensure its writes are persistent on the
> persistent memor
On 03/08/18 10:33 +0800, Haozhong Zhang wrote:
> (Patch 5 is only for reviewers to run test cases in patch 4)
>
> ACPI 6.2A Table 5-129 "SPA Range Structure" requires the proximity
> domain of an NVDIMM SPA range must match the corresponding entry in
> SRAT table
it's been inherited from.
Signed-off-by: Haozhong Zhang <haozhong.zh...@intel.com>
---
hmp.c | 14 +++---
hw/mem/pc-dimm.c | 20 ++--
numa.c | 19 +--
qapi/misc.json | 18 +-
4 files changed, 59 insertions(+), 12 de
ity domain of the last
node as before.
Signed-off-by: Haozhong Zhang <haozhong.zh...@intel.com>
---
hw/i386/acpi-build.c | 57
1 file changed, 53 insertions(+), 4 deletions(-)
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
y structures of PC-DIMM devices as well.
* Add test cases.
Haozhong Zhang (5):
pc-dimm: make qmp_pc_dimm_device_list() sort devices by address
qmp: distinguish PC-DIMM and NVDIMM in MemoryDeviceInfoList
hw/acpi-build: build SRAT memory affinity structures for DIMM devices
tests/bios-tab
(qdev_get_machine(), );
could be replaced with simpler:
list = qmp_pc_dimm_device_list();
* follow up patch will use it in build_srat()
Signed-off-by: Haozhong Zhang <haozhong.zh...@intel.com>
Reviewed-by: Igor Mammedov <imamm...@redhat.com>
---
hw/mem/pc-dimm.c
On 03/02/18 12:03, Anthony PERARD wrote:
> On Wed, Feb 28, 2018 at 05:36:59PM +0800, Haozhong Zhang wrote:
> > On 02/27/18 17:22, Anthony PERARD wrote:
> > > On Thu, Dec 07, 2017 at 06:18:02PM +0800, Haozhong Zhang wrote:
> > > > This is th
On 03/05/18 13:14 -0600, Eric Blake wrote:
> On 03/05/2018 12:57 AM, Haozhong Zhang wrote:
> > It may need to treat PC-DIMM and NVDIMM differently, e.g., when
> > deciding the necessity of non-volatile flag bit in SRAT memory
> > affinity structures.
> >
> > NVDI
On 03/02/18 11:50, Anthony PERARD wrote:
> On Wed, Feb 28, 2018 at 03:56:54PM +0800, Haozhong Zhang wrote:
> > On 02/27/18 16:41, Anthony PERARD wrote:
> > > On Thu, Dec 07, 2017 at 06:18:05PM +0800, Haozhong Zhang wrote:
> > > > @@ -108,7 +109,
Use pc_dimm_built_list to hide recursive callbacks from callers.
Signed-off-by: Haozhong Zhang <haozhong.zh...@intel.com>
---
hw/mem/pc-dimm.c | 83 +---
hw/ppc/spapr.c | 3 +-
include/hw/mem/pc-dimm.h | 2 +-
of the last node.
Add test cases on PC and Q35 machines with 3 proximity domains, and
one PC-DIMM and one NVDIMM attached to the second proximity domain.
Check whether the QEMU-built SRAT tables match with the expected ones.
Signed-off-by: Haozhong Zhang <haozhong.zh...@intel.com>
Suggested-by
ity domain of the last
node as before.
Signed-off-by: Haozhong Zhang <haozhong.zh...@intel.com>
---
hw/i386/acpi-build.c | 60
1 file changed, 56 insertions(+), 4 deletions(-)
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
static-plugged'/'present at boot time' in commit message.
Changes in v2:
* Build SRAT memory affinity structures of PC-DIMM devices as well.
* Add test cases.
Haozhong Zhang (5):
pc-dimm: refactor qmp_pc_dimm_device_list
qmp: distinguish PC-DIMM and NVDIMM in MemoryDeviceInfoList
hw/acpi-bui
Some test cases may require machine options beyond those used in
the current test_acpi_ones(), e.g., nvdimm test cases require the
machine option 'nvdimm=on'.
Signed-off-by: Haozhong Zhang <haozhong.zh...@intel.com>
---
tests/bios-tables-test.
-specific data is currently left empty and will be filled
when necessary in the future.
Signed-off-by: Haozhong Zhang <haozhong.zh...@intel.com>
---
hmp.c | 14 +++---
hw/mem/pc-dimm.c | 20 ++--
numa.c | 19 +--
qapi-schema.jso
On 03/01/18 14:01 +0100, Igor Mammedov wrote:
> On Thu, 1 Mar 2018 19:56:51 +0800
> Haozhong Zhang <haozhong.zh...@intel.com> wrote:
>
> > On 03/01/18 11:42 +0100, Igor Mammedov wrote:
> > > On Wed, 28 Feb 2018 12:02:58 +0800
> > > Haozhong
On 03/01/18 11:42 +0100, Igor Mammedov wrote:
> On Wed, 28 Feb 2018 12:02:58 +0800
> Haozhong Zhang <haozhong.zh...@intel.com> wrote:
>
> > ACPI 6.2A Table 5-129 "SPA Range Structure" requires the proximity
> > domain of an NVDIMM SPA range must match with
On 02/27/18 17:22, Anthony PERARD wrote:
> On Thu, Dec 07, 2017 at 06:18:02PM +0800, Haozhong Zhang wrote:
> > This is the QEMU part patches that works with the associated Xen
> > patches to enable vNVDIMM support for Xen HVM domains. Xen relies on
> > QEMU to build
On 02/27/18 16:46, Anthony PERARD wrote:
> On Thu, Dec 07, 2017 at 06:18:07PM +0800, Haozhong Zhang wrote:
> > Xen is going to reuse QEMU to build ACPI of some devices (e.g., NFIT
> > and SSDT for NVDIMM) for HVM domains. The existing QEMU ACPI build
> > code requir
On 02/27/18 16:41, Anthony PERARD wrote:
> On Thu, Dec 07, 2017 at 06:18:05PM +0800, Haozhong Zhang wrote:
> > diff --git a/backends/hostmem.c b/backends/hostmem.c
> > index ee2c2d5bfd..ba13a52994 100644
> > --- a/backends/hostmem.c
> > +++ b/backends/h
On 02/27/18 16:37, Anthony PERARD wrote:
> On Thu, Dec 07, 2017 at 06:18:04PM +0800, Haozhong Zhang wrote:
> > The guest physical address of vNVDIMM is allocated from the hotplug
> > memory region, which is not created when QEMU is used as Xen device
> > model. In
When loading a normal page to persistent memory, load its data by
libpmem function pmem_memcpy_nodrain() instead of memcpy(). Combined
with a call to pmem_drain() at the end of memory loading, we can
guarantee all those normal pages are persistently loaded to PMEM.
Signed-off-by: Haozhong Zhang
On 02/28/18 15:25 +0800, Haozhong Zhang wrote:
> QEMU writes to vNVDIMM backends in the vNVDIMM label emulation and
> live migration. If the backend is on the persistent memory, QEMU needs
> to take proper operations to ensure its writes are persistent on the
> persistent memory. Other
configurations.
Signed-off-by: Haozhong Zhang <haozhong.zh...@intel.com>
---
hw/mem/nvdimm.c | 9 -
include/qemu/pmem.h | 23 +++
stubs/Makefile.objs | 1 +
stubs/pmem.c | 19 +++
4 files changed, 51 insertions(+), 1 deletion(-)
creat
When loading a compressed page to persistent memory, flush CPU cache
after the data is decompressed. Combined with a call to pmem_drain()
at the end of memory loading, we can guarantee those compressed pages
are persistently loaded to PMEM.
Signed-off-by: Haozhong Zhang <haozhong.zh...@intel.
-by: Haozhong Zhang <haozhong.zh...@intel.com>
---
migration/ram.c | 6 +++---
migration/xbzrle.c | 8 ++--
migration/xbzrle.h | 3 ++-
tests/Makefile.include | 2 +-
tests/test-xbzrle.c | 4 ++--
5 files changed, 14 insertions(+), 9 deletions(-)
diff --git a/migration/r
(formerly known as NVML), https://github.com/pmem/pmdk/
[2]
https://github.com/pmem/pmdk/blob/38bfa652721a37fd94c0130ce0e3f5d8baa3ed40/src/libpmem/pmem.c#L33
Signed-off-by: Haozhong Zhang <haozhong.zh...@intel.com>
---
configure | 35 +++
1 file chang
configurations, pmem_drain() can be
"sfence". Therefore, we do not call pmem_drain() after each
pmem_memset_nodrain(), or use pmem_memset_persist() (equivalent to
pmem_memset_nodrain() + pmem_drain()), in order to avoid unnecessary
overhead.
Signed-off-by: Haozhong Zhang <haozhong.zh
' flag is converted to the QEMU_RAM_SHARE bit in
flags, and other flag bits are ignored by the above functions right now.
Signed-off-by: Haozhong Zhang <haozhong.zh...@intel.com>
---
backends/hostmem-file.c | 3 ++-
exec.c | 7 ---
include/exec/memory.h
the backend
storage of memory-backend-file is a real persistent memory. If
'pmem=on', QEMU will set the flag RAM_PMEM in the RAM block of the
corresponding memory region.
Signed-off-by: Haozhong Zhang <haozhong.zh...@intel.com>
---
backends/hostmem-file.c | 26 +-
tion 'pmem' to hostmem-file.
* (Patch 3) Use libpmem to operate on the persistent memory, rather
than re-implementing those operations in QEMU.
* (Patch 5-8) Consider the write persistence in the migration path.
Haozhong Zhang (8):
[1/8] memory, exec: switch file ram allocation functions
ity domain of the last
node as before.
Changes in v2:
* Build SRAT memory affinity structures of PC-DIMM devices as well.
* Add test cases.
Haozhong Zhang (3):
hw/acpi-build: build SRAT memory affinity structures for DIMM devices
tests/bios-tables-test: allow setting extra machine options
tests/b
Some test cases may require machine options beyond those
used in the current test_acpi_ones(), e.g., nvdimm test cases require
the machine option 'nvdimm=on'.
Signed-off-by: Haozhong Zhang <haozhong.zh...@intel.com>
---
tests/bios-tables-test.
ity domain of the last
node as before.
Signed-off-by: Haozhong Zhang <haozhong.zh...@intel.com>
---
hw/i386/acpi-build.c | 50
hw/mem/pc-dimm.c | 8
include/hw/mem/pc-dimm.h | 10 ++
3 files changed, 64 insertions(+), 4 de
On 02/26/18 14:59 +0100, Igor Mammedov wrote:
> On Thu, 22 Feb 2018 09:40:00 +0800
> Haozhong Zhang <haozhong.zh...@intel.com> wrote:
>
> > On 02/21/18 14:55 +0100, Igor Mammedov wrote:
> > > On Tue, 20 Feb 2018 17:17:58 -0800
> > > Dan Wi
Hi Fam,
On 02/23/18 17:17 -0800, no-re...@patchew.org wrote:
> Hi,
>
> This series failed build test on s390x host. Please find the details below.
>
> N/A. Internal error while reading log file
What does this message mean? Where can I get the log file?
Thanks,
Haozhong
On 02/21/18 14:55 +0100, Igor Mammedov wrote:
> On Tue, 20 Feb 2018 17:17:58 -0800
> Dan Williams <dan.j.willi...@intel.com> wrote:
>
> > On Tue, Feb 20, 2018 at 6:10 AM, Igor Mammedov <imamm...@redhat.com> wrote:
> > > On Sat, 17 Feb 2018 14:31:35 +08
y affinity structure for each NVDIMM device with the
proximity domain used in NFIT. The remaining hot-pluggable address
space is covered by one or multiple SRAT memory affinity structures
with the proximity domain of the last node as before.
Signed-off-by: Haozhong Zhang <haozhong.zh...@intel
ns in QEMU.
* (Patch 5-8) Consider the write persistence in the migration path.
Haozhong Zhang (8):
[1/8] memory, exec: switch file ram allocation functions to 'flags' parameters
[2/8] hostmem-file: add the 'pmem' option
[3/8] configure: add libpmem support
[4/8] mem/nvdimm: ensure write p
On 02/09/18 14:27, Stefan Hajnoczi wrote:
> On Wed, Feb 07, 2018 at 03:33:27PM +0800, Haozhong Zhang wrote:
> > @@ -156,11 +157,17 @@ static void nvdimm_write_label_data(NVDIMMDevice
> > *nvdimm, const void *buf,
> > {
> > MemoryRegion *mr;
> > PC
On 02/07/18 13:03, Dr. David Alan Gilbert wrote:
> * Haozhong Zhang (haozhong.zh...@intel.com) wrote:
> > On 02/07/18 11:54, Dr. David Alan Gilbert wrote:
> > > * Haozhong Zhang (haozhong.zh...@intel.com) wrote:
> > > > When loading a compressed page t
On 02/07/18 19:52 +0800, Haozhong Zhang wrote:
> On 02/07/18 11:38, Dr. David Alan Gilbert wrote:
> > * Haozhong Zhang (haozhong.zh...@intel.com) wrote:
> > > When loading a zero page, check whether it will be loaded to
> > > persistent memory. If yes,
On 02/07/18 11:54, Dr. David Alan Gilbert wrote:
> * Haozhong Zhang (haozhong.zh...@intel.com) wrote:
> > When loading a compressed page to persistent memory, flush CPU cache
> > after the data is decompressed. Combined with a call to pmem_drain()
> > at the end of m
On 02/07/18 11:49, Dr. David Alan Gilbert wrote:
> * Haozhong Zhang (haozhong.zh...@intel.com) wrote:
> > When loading a normal page to persistent memory, load its data by
> > libpmem function pmem_memcpy_nodrain() instead of memcpy(). Combined
> > with a call to pm
On 02/07/18 11:38, Dr. David Alan Gilbert wrote:
> * Haozhong Zhang (haozhong.zh...@intel.com) wrote:
> > When loading a zero page, check whether it will be loaded to
> > persistent memory. If yes, load it by libpmem function
> > pmem_memset_nodrain(). Combined with
But
> I also see empty definition. Anything I am missing here?
Functions defined in include/qemu/pmem.h are stubs, used only when
QEMU is not compiled with libpmem. When QEMU is compiled with
--enable-libpmem, the implementations in libpmem are used.
Haozhong
>
> Thanks,
> Pank
configurations.
Signed-off-by: Haozhong Zhang <haozhong.zh...@intel.com>
---
hw/mem/nvdimm.c | 9 -
include/qemu/pmem.h | 31 +++
2 files changed, 39 insertions(+), 1 deletion(-)
create mode 100644 include/qemu/pmem.h
diff --git a/hw/mem/nvdimm.c b/
-by: Haozhong Zhang <haozhong.zh...@intel.com>
---
migration/ram.c | 15 ++-
migration/xbzrle.c | 20 ++--
migration/xbzrle.h | 1 +
3 files changed, 29 insertions(+), 7 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index 924d2b9537..87f977617d
than re-implementing those operations in QEMU.
* (Patch 5-8) Consider the write persistence in the migration path.
Haozhong Zhang (8):
[1/8] memory, exec: switch file ram allocation functions to 'flags' parameters
[2/8] hostmem-file: add the 'pmem' option
[3/8] configure: add libpmem support
On 01/31/18 19:02 -0800, Dan Williams wrote:
> On Wed, Jan 31, 2018 at 6:29 PM, Haozhong Zhang
> <haozhong.zh...@intel.com> wrote:
> > + vfio maintainer Alex Williamson in case my understanding of vfio is
> > incorrect.
> >
> > On 01/31/18 16:32 -0800, Dan
+ vfio maintainer Alex Williamson in case my understanding of vfio is incorrect.
On 01/31/18 16:32 -0800, Dan Williams wrote:
> On Wed, Jan 31, 2018 at 4:24 PM, Haozhong Zhang
> <haozhong.zh...@intel.com> wrote:
> > On 01/31/18 16:08 -0800, Dan Williams wrote:
> >> On W
On 01/31/18 16:08 -0800, Dan Williams wrote:
> On Wed, Jan 31, 2018 at 4:02 PM, Haozhong Zhang
> <haozhong.zh...@intel.com> wrote:
> > On 01/31/18 14:25 -0800, Dan Williams wrote:
> >> On Tue, Jan 30, 2018 at 10:02 PM, Haozhong Zhang
> >> <haozhong.zh
On 01/31/18 14:25 -0800, Dan Williams wrote:
> On Tue, Jan 30, 2018 at 10:02 PM, Haozhong Zhang
> <haozhong.zh...@intel.com> wrote:
> > Linux 4.15 introduces a new mmap flag MAP_SYNC, which can be used to
> > guarantee the write persistence to mmap'ed files supporting DAX
When there are multiple memory backends in use, including the object type
name, ID and the property name in the error message can help users to
locate the error.
Signed-off-by: Haozhong Zhang <haozhong.zh...@intel.com>
Suggested-by: "Dr. David Alan Gilbert" <dgilb...@r
.
Signed-off-by: Haozhong Zhang <haozhong.zh...@intel.com>
---
include/exec/memory.h | 26 ++
include/exec/ram_addr.h | 4
include/qemu/mmap-alloc.h | 4
include/standard-headers/linux/mman.
As more flag parameters besides the existing 'share' are going to be
added to qemu_ram_alloc_from_{file,fd}(), let's switch 'share' to a
'flags' parameter in advance, so as to ease further additions.
Signed-off-by: Haozhong Zhang <haozhong.zh...@intel.com>
---
exec.c
tion to control the use of MAP_SYNC. (Eduardo Habkost)
* Remove the unnecessary set of MAP_SHARED_VALIDATE in some cases and
the retry mechanism in qemu_ram_mmap(). (Michael S. Tsirkin)
* Move OS dependent definitions of MAP_SYNC and MAP_SHARED_VALIDATE
to osdep.h. (Michael S. Tsirkin)
Haozhong Z
As more flag parameters besides the existing 'shared' are going to be
added to qemu_ram_mmap(), let's switch 'shared' to a 'flags' parameter
in advance, so as to ease the further additions.
Signed-off-by: Haozhong Zhang <haozhong.zh...@intel.com>
Suggested-by: "Michael S.
As more flag parameters besides the existing 'share' are going to be
added to memory_region_init_ram_from_file(), let's switch 'share' to
a 'flags' parameter in advance, so as to ease the further additions.
Signed-off-by: Haozhong Zhang <haozhong.zh...@intel.com>
---
backends/hostmem-file
as if
'sync=on'; otherwise, work as if 'sync=off'
Signed-off-by: Haozhong Zhang <haozhong.zh...@intel.com>
Suggested-by: Eduardo Habkost <ehabk...@redhat.com>
Reviewed-by: Michael S. Tsirkin <m...@redhat.com>
---
backends/hostmem-file.c | 41
On 01/24/18 22:23 +0200, Michael S. Tsirkin wrote:
> On Wed, Jan 17, 2018 at 04:13:25PM +0800, Haozhong Zhang wrote:
> > This option controls whether QEMU mmap(2) the memory backend file with
> > MAP_SYNC flag, which can fully guarantee the guest write persistence
> > to the
On 01/24/18 22:20 +0200, Michael S. Tsirkin wrote:
> > index 50385e3f81..dd5876471f 100644
> > --- a/include/qemu/mmap-alloc.h
> > +++ b/include/qemu/mmap-alloc.h
> > @@ -7,7 +7,8 @@ size_t qemu_fd_getpagesize(int fd);
> >
> > size_t qemu_mempath_getpagesize(const char *mem_path);
> >
> >
1 - 100 of 394 matches