Re: [Qemu-devel] [PATCH 00/35] pc: ACPI memory hotplug

2014-08-25 Thread Anshul Makkar
Hi,

I am testing memory hotadd/remove functionality for Windows guest
(currently 2012 server). Memory hot remove is not working.

As mentioned in the mail chain, hot remove on Windows is not supported. I
just wanted to check whether that is still the case, or whether it is now
supported or a work in progress. If it is supported or in progress, could
you please share the relevant links/patches?

Sorry if I have missed any recent patches that support Windows memory hot
remove.

Thanks
Anshul Makkar

On Wed, May 7, 2014 at 11:15 AM, Stefan Priebe - Profihost AG 
s.pri...@profihost.ag wrote:

 max number of supported DIMM devices is 255 (due to the ACPI object name
 limit); it could be increased by creating several containers and putting
 DIMMs there (exercise for the future)



Re: [Qemu-devel] [PATCH 00/35] pc: ACPI memory hotplug

2014-08-25 Thread Paolo Bonzini
On 25/08/2014 15:28, Anshul Makkar wrote:
 
 I am testing memory hotadd/remove functionality for Windows guest
 (currently 2012 server). Memory hot remove is not working.
 
 As mentioned in the mail chain, hot remove on Windows is not supported.
 I just wanted to check whether that is still the case, or whether it is
 now supported or a work in progress. If it is supported or in progress,
 could you please share the relevant links/patches?
 
 Sorry if I have missed any recent patches that support Windows memory
 hot remove.

Hot remove is not implemented yet.

Paolo



Re: [Qemu-devel] [PATCH 00/35] pc: ACPI memory hotplug

2014-05-07 Thread Stefan Priebe - Profihost AG
Hello Igor,

while testing your patch set I ran into a rather odd problem.

I wanted to test migration, and it turned out that migration after
plugging in memory works fine only if I run the target VM without the
-daemonize option.

If I enable the -daemonize option, the target VM tries to read from
non-readable memory.

proc maps shows:
7f9334021000-7f933800 ---p  00:00 0

which is the region it tries to read from.

The memory layout also differs between daemonized and non-daemonized
mode.

Stefan

On 04.04.2014 15:36, Igor Mammedov wrote:
 What's new since v7:
 
 * Per Andreas' suggestion dropped DIMMBus concept.
 
 * Added hotplug binding for bus-less devices
 
 * DIMM device is split into a backend and a frontend. Therefore the
   following commands/options were added to support it:
 
   For memory-ram backend:
   CLI: -object-add memory-ram,
   with options: 'id' and 'size'
   For dimm frontend:
   option size became read-only, pulling its size from the attached backend
   added option memdev for specifying backend by 'id'
 
 * dropped support for 32 bit guests
 
 * failed hotplug action doesn't consume 1 slot anymore
 
 * various fixes addressing reviewers' comments, most of them in the ACPI part
 ---
 
 This series allows hotplugging 'arbitrary' DIMM devices at runtime,
 specifying size, NUMA node mapping (guest side), slot and the address
 where to map it.
 
 Due to an ACPI limitation, the number of possible DIMM devices must be
 specified up front. For this task the -m option was extended to support
 the following format:
 
   -m [mem=]RamSize[,slots=N,maxmem=M]
 
 To allow memory hotplug, the user must specify a pair of additional parameters:
 'slots' - number of possible increments
 'maxmem' - max possible total memory size QEMU is allowed to use,
including RamSize.
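
 For instance, the pair could be specified like this (a sketch; the disk
 image path and the sizes are placeholders):

```shell
# Start with 2G of initial RAM; allow up to 4 DIMMs to be plugged,
# with a total memory limit of 8G (initial RAM included).
qemu-system-x86_64 -enable-kvm \
    -m 2G,slots=4,maxmem=8G \
    guest.img
```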
 
 Minimal monitor command syntax to hotplug a DIMM device:
 
   object_add memory-ram,id=memX,size=1G
   device_add dimm,id=dimmX,memdev=memX
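
 With the HMP monitor exposed on a unix socket (the /tmp/mon path is only
 an example), those two commands could be sent non-interactively with socat:

```shell
# Send the hotplug commands to a running guest's HMP monitor socket.
echo "object_add memory-ram,id=mem1,size=1G" | socat - unix-connect:/tmp/mon
echo "device_add dimm,id=dimm1,memdev=mem1" | socat - unix-connect:/tmp/mon
```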
 
 The DIMM device provides the following properties that can be used with
 device_add / -device to alter the default behavior:
 
   id    - unique string identifying the device [mandatory]
   slot  - number in range [0-slots) [optional]; if not specified,
   the first free slot is used
   node  - NUMA node id [optional] (default: 0)
   size  - amount of memory to add; read-only, derived from the backing memdev
   start - guest physical address where to plug the DIMM [optional];
   if not specified, the first gap in the hotplug memory region
   that fits the DIMM is used
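
 As an illustration of the optional properties, a DIMM could be pinned to a
 specific slot and NUMA node like this (HMP monitor commands; the ids and
 values are made up):

```shell
# Create a 1G backend, then plug it as a DIMM into slot 2 of NUMA node 0.
object_add memory-ram,id=mem2,size=1G
device_add dimm,id=dimm2,memdev=mem2,slot=2,node=0
```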
 
  The -device option can be used for adding potentially hot-unpluggable DIMMs,
 and also for specifying already-hotplugged DIMMs in the migration case.
 
 Tested guests:
  - RHEL 6x64
  - Windows 2012DCx64
  - Windows 2008DCx64
 
 Known limitations/bugs/TODOs:
  - hot-remove is not supported, yet
  - max number of supported DIMM devices is 255 (due to the ACPI object name
limit); it could be increased by creating several containers and putting
DIMMs there (exercise for the future)
  - the e820 table doesn't include DIMM devices added with -device
(or, after reboot, devices added with device_add)
  - Windows 2008 remembers the DIMM configuration, so if a DIMM with a
different start/size is added into the same slot, it refuses to use it,
insisting on the old mapping.
 
 QEMU git tree for testing is available at:
   https://github.com/imammedo/qemu/commits/memory-hotplug-v8
 
 Example QEMU cmd line:
   qemu-system-x86_64 -enable-kvm -monitor unix:/tmp/mon,server,nowait \ 
  -m 4096,slots=4,maxmem=8G guest.img
 
 PS:
   Windows guests require an SRAT table for hotplug to work, so add an extra
 option:
    -numa node
   to the QEMU command line.
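
 Putting the pieces together, an invocation suitable for a Windows guest
 might look like this (a sketch; the image path and sizes are placeholders):

```shell
# -numa node makes QEMU generate an SRAT table, which Windows needs for
# memory hotplug; the monitor socket allows later object_add/device_add.
qemu-system-x86_64 -enable-kvm \
    -monitor unix:/tmp/mon,server,nowait \
    -m 4G,slots=4,maxmem=8G \
    -numa node \
    guest.img
```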
 
 
 Igor Mammedov (34):
   vl: convert -m to QemuOpts
   object_add: allow completion handler to get canonical path
   add memdev backend infrastructure
   vl.c: extend -m option to support options for memory hotplug
   add pc-{i440fx,q35}-2.1 machine types
   pc: create custom generic PC machine type
   qdev: hotplug for bus-less devices
   qdev: expose DeviceState.hotplugged field as a property
   dimm: implement dimm device abstraction
   memory: add memory_region_is_mapped() API
   dimm: do not allow to set already busy memdev
   pc: initialize memory hotplug address space
   pc: exit QEMU if slots > 256
   pc: add 'etc/reserved-memory-end' fw_cfg interface for SeaBIOS
   pc: add memory hotplug handler to PC_MACHINE
   dimm: add busy address check and address auto-allocation
   dimm: add busy slot check and slot auto-allocation
   acpi: rename cpu_hotplug_defs.h to acpi_defs.h
   acpi: memory hotplug ACPI hardware implementation
   trace: add acpi memory hotplug IO region events
   trace: add DIMM slot & address allocation for target-i386
   acpi:piix4: make plug/unplug callbacks generic
   acpi:piix4: add memory hotplug handling
   pc: ich9 lpc: make it work with global/compat properties
   acpi:ich9: add memory hotplug handling
   pc: migrate piix4 & ich9 MemHotplugState
   pc: propagate memory hotplug event to ACPI device

Re: [Qemu-devel] [PATCH 00/35] pc: ACPI memory hotplug

2014-04-07 Thread Igor Mammedov
On Fri, 4 Apr 2014 17:57:28 +0100
Dr. David Alan Gilbert dgilb...@redhat.com wrote:

 * Igor Mammedov (imamm...@redhat.com) wrote:
 
 snip
 
  This series allows hotplugging 'arbitrary' DIMM devices at runtime,
  specifying size, NUMA node mapping (guest side), slot and the address
  where to map it.
 
 Some high level questions:
   1) Is the intention that all guest RAM would be hot pluggable like this
  (i.e. no memory would be allocated in the normal way)?
Later I plan to convert initial memory to DIMM devices as well, but for
simplicity's sake only to non-hotpluggable ones so far.

   2) Does something stop it being invoked during a migration?
As far as I know, there are no checks to prevent any hotplug operation during
migration. Considering how migration currently works, hotplug should be
disabled at migration time.

 
 Dave
 --
 Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK




Re: [Qemu-devel] [PATCH 00/35] pc: ACPI memory hotplug

2014-04-04 Thread Igor Mammedov
On Fri, 04 Apr 2014 16:07:53 +0200
Paolo Bonzini pbonz...@redhat.com wrote:

 On 04/04/2014 15:36, Igor Mammedov wrote:
 
  * dropped support for 32 bit guests
 
 Can you explain this more?
v7 had the ability to map hotplugged DIMMs below 4 GB, but Gerd suggested
dropping it since it consumes precious lowmem needed for PCI devices. This
version maps DIMM devices above the 4 GB memory area. So 'dropped support for
32-bit guests' here means that a guest which can't handle GPAs above 4 GB and
64-bit _CRS won't have working memory cold/hot-plug of DIMM devices.

 
  PS:
Windows guest requires SRAT table for hotplug to work so add an extra 
  option:
 -numa node
to QEMU command line.
 
 Should we consider always exposing a SRAT for 2.1+ machine types?
That certainly would help management not have to worry about it.

 
 Paolo


-- 
Regards,
  Igor



Re: [Qemu-devel] [PATCH 00/35] pc: ACPI memory hotplug

2014-04-04 Thread Paolo Bonzini

On 04/04/2014 16:24, Igor Mammedov wrote:

 Can you explain this more?

v7 had the ability to map hotplugged DIMMs below 4 GB, but Gerd suggested
dropping it since it consumes precious lowmem needed for PCI devices. This
version maps DIMM devices above the 4 GB memory area. So 'dropped support for
32-bit guests' here means that a guest which can't handle GPAs above 4 GB and
64-bit _CRS won't have working memory cold/hot-plug of DIMM devices.


Ok, so PAE should work.  Just thinking ahead about release notes. :)

Paolo



Re: [Qemu-devel] [PATCH 00/35] pc: ACPI memory hotplug

2014-04-04 Thread Igor Mammedov
On Fri, 04 Apr 2014 17:19:50 +0200
Paolo Bonzini pbonz...@redhat.com wrote:

 On 04/04/2014 16:24, Igor Mammedov wrote:
   Can you explain this more?
 
  v7 had the ability to map hotplugged DIMMs below 4 GB, but Gerd suggested
  dropping it since it consumes precious lowmem needed for PCI devices. This
  version maps DIMM devices above the 4 GB memory area. So 'dropped support
  for 32-bit guests' here means that a guest which can't handle GPAs above
  4 GB and 64-bit _CRS won't have working memory cold/hot-plug of DIMM
  devices.
 
 Ok, so PAE should work.  Just thinking ahead about release notes. :)
It should, but to confirm I've been installing WS2003EE for the last hour to
see if it works with high mem.

 
 Paolo


-- 
Regards,
  Igor



Re: [Qemu-devel] [PATCH 00/35] pc: ACPI memory hotplug

2014-04-04 Thread Dr. David Alan Gilbert
* Igor Mammedov (imamm...@redhat.com) wrote:

snip

 This series allows hotplugging 'arbitrary' DIMM devices at runtime,
 specifying size, NUMA node mapping (guest side), slot and the address
 where to map it.

Some high level questions:
  1) Is the intention that all guest RAM would be hot pluggable like this
 (i.e. no memory would be allocated in the normal way)?
  2) Does something stop it being invoked during a migration?

Dave
--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK