Re: [libvirt] [PATCH 0/8] logically memory hotplug via guest agent

2015-06-11 Thread Jiri Denemark
On Thu, Jun 11, 2015 at 09:38:24 +0800, zhang bo wrote:
 On 2015/6/10 17:31, Daniel P. Berrange wrote:
 
  On Wed, Jun 10, 2015 at 10:28:08AM +0100, Daniel P. Berrange wrote:
  On Wed, Jun 10, 2015 at 05:24:50PM +0800, zhang bo wrote:
  On 2015/6/10 16:39, Vasiliy Tolstov wrote:
 
  2015-06-10 11:37 GMT+03:00 Daniel P. Berrange berra...@redhat.com:
   The udev rules are really something the OS vendor should set up, so
   that it just works.
  
  
   I think so; vcpu hotplug is also covered by udev. Maybe we need
   something to hot-remove memory and cpu, because in the guest we need
   to offline them first.
 
 
 
  In fact, we also have a --guest option for the 'virsh setvcpus' command,
  which also uses qga commands to do the logical hotplug/unplug job,
  although udev rules seem to cover the vcpu logical hotplug issue.
 
  virsh # help setvcpus
  .
  --guest  modify cpu state in the guest
 
 
  BTW: we haven't seen OSes with vendor-provided udev rules for memory
  hotplug events, and adding such rules means that we have to *interfere
  inside the guest*, which does not seem like a good option.
 
  I was suggesting that an RFE be filed with any vendor who doesn't do it
  to add this capability, not that we add udev rules ourselves.
  
  Or actually, it probably is sufficient to just send a patch to the upstream
  systemd project to add the desired rule to udev. That way all Linux distros
  will inherit the feature when they update to new udev.
  
 
 Then here comes the question: how do we deal with guests that are already
 in use? I think it's better to operate on them from the host side without
 getting into the guest. That's the advantage of qemu-guest-agent, so why
 not take advantage of it?

Such guests would need an updated qemu-guest-agent anyway. And installing
a new version of qemu-guest-agent is not any easier than installing an
updated udev or a new udev rule. That is, I don't think the
qemu-guest-agent way has any benefits over a udev rule. It's rather the
opposite.

Jirka



Re: [libvirt] [PATCH 0/8] logically memory hotplug via guest agent

2015-06-11 Thread Vasiliy Tolstov
2015-06-11 11:42 GMT+03:00 Jiri Denemark jdene...@redhat.com:
 Such guests would need an updated qemu-guest-agent anyway. And installing
 a new version of qemu-guest-agent is not any easier than installing an
 updated udev or a new udev rule. That is, I don't think the
 qemu-guest-agent way has any benefits over a udev rule. It's rather the
 opposite.


Maybe, as a workaround, install udev rules for cpu/memory hotplug along
with qemu-ga (if the OS is old)? Then we have udev rules that do all the
work, and packagers can enable/disable installing the rules.
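
For reference, a minimal sketch of the kind of guest-side rules being
discussed (the file name is made up here, and the exact sysfs attribute
names can vary with the kernel and udev version):

    # /etc/udev/rules.d/80-hotplug-online.rules (hypothetical path)
    # bring hot-added CPUs online automatically
    SUBSYSTEM=="cpu", ACTION=="add", TEST=="online", ATTR{online}=="0", ATTR{online}="1"
    # bring hot-added memory blocks online automatically
    SUBSYSTEM=="memory", ACTION=="add", ATTR{state}=="offline", ATTR{state}="online"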

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [libvirt] [PATCH 0/8] logically memory hotplug via guest agent

2015-06-10 Thread Daniel P. Berrange
On Wed, Jun 10, 2015 at 10:28:08AM +0100, Daniel P. Berrange wrote:
 On Wed, Jun 10, 2015 at 05:24:50PM +0800, zhang bo wrote:
  On 2015/6/10 16:39, Vasiliy Tolstov wrote:
  
   2015-06-10 11:37 GMT+03:00 Daniel P. Berrange berra...@redhat.com:
    The udev rules are really something the OS vendor should set up, so
    that it just works.
    
    
   I think so; vcpu hotplug is also covered by udev. Maybe we need
   something to hot-remove memory and cpu, because in the guest we need
   to offline them first.
   
  
  
  In fact, we also have a --guest option for the 'virsh setvcpus' command,
  which also uses qga commands to do the logical hotplug/unplug job,
  although udev rules seem to cover the vcpu logical hotplug issue.
  
  virsh # help setvcpus
  .
  --guest  modify cpu state in the guest
  
  
  BTW: we haven't seen OSes with vendor-provided udev rules for memory
  hotplug events, and adding such rules means that we have to *interfere
  inside the guest*, which does not seem like a good option.
 
 I was suggesting that an RFE be filed with any vendor who doesn't do it
 to add this capability, not that we add udev rules ourselves.

Or actually, it probably is sufficient to just send a patch to the upstream
systemd project to add the desired rule to udev. That way all Linux distros
will inherit the feature when they update to new udev.

Regards,
Daniel



Re: [libvirt] [PATCH 0/8] logically memory hotplug via guest agent

2015-06-10 Thread Daniel P. Berrange
On Wed, Jun 10, 2015 at 02:05:16PM +0800, zhang bo wrote:
 On 2015/6/10 13:40, Vasiliy Tolstov wrote:
 
  2015-06-10 5:28 GMT+03:00 zhang bo oscar.zhan...@huawei.com:
  Thank you for your reply.
  Before this patch, we needed to manually online memory blocks inside the
  guest after DIMM memory hotplug for most *nix OSes (Windows guests
  automatically bring their memory blocks online after hotplugging).
  That is to say, we need to LOGICALLY hotplug memory after the PHYSICAL
  hotplug; this patch does the LOGICAL part.
  With this patch, we no longer need to get into the guest to manually
  online the blocks, which is not even possible for most host
  administrators.
  
  
  As I remember, this online step can easily be automated via udev rules.
  
 
 
 Logically that's true, but adding udev rules means:
 1) you have to get into the guest
 2) you have to be familiar with udev rules
 
 That is not as convenient as just calling a libvirt API to do it.

The udev rules are really something the OS vendor should set up, so
that it just works.

Regards,
Daniel



Re: [libvirt] [PATCH 0/8] logically memory hotplug via guest agent

2015-06-10 Thread Vasiliy Tolstov
2015-06-10 11:37 GMT+03:00 Daniel P. Berrange berra...@redhat.com:
 The udev rules are really something the OS vendor should set up, so
 that it just works.


I think so; vcpu hotplug is also covered by udev. Maybe we need
something to hot-remove memory and cpu, because in the guest we need to
offline them first.

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru



Re: [libvirt] [PATCH 0/8] logically memory hotplug via guest agent

2015-06-10 Thread zhang bo
On 2015/6/10 16:39, Vasiliy Tolstov wrote:

 2015-06-10 11:37 GMT+03:00 Daniel P. Berrange berra...@redhat.com:
 The udev rules are really something the OS vendor should set up, so
 that it just works.
 
 
 I think so; vcpu hotplug is also covered by udev. Maybe we need
 something to hot-remove memory and cpu, because in the guest we need to
 offline them first.
 


In fact, we also have a --guest option for the 'virsh setvcpus' command, which
also uses qga commands to do the logical hotplug/unplug job, although udev
rules seem to cover the vcpu logical hotplug issue.

virsh # help setvcpus
.
--guest  modify cpu state in the guest
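
For example, the agent-based vcpu path can already be driven entirely from
the host (the domain name and vcpu count below are illustrative):

    # online 4 vcpus inside the guest via qemu-guest-agent
    virsh setvcpus guest1 4 --guest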


BTW: we haven't seen OSes with vendor-provided udev rules for memory hotplug
events, and adding such rules means that we have to *interfere inside the
guest*, which does not seem like a good option.

-- 
Oscar
oscar.zhan...@huawei.com  



Re: [libvirt] [PATCH 0/8] logically memory hotplug via guest agent

2015-06-10 Thread Daniel P. Berrange
On Wed, Jun 10, 2015 at 05:24:50PM +0800, zhang bo wrote:
 On 2015/6/10 16:39, Vasiliy Tolstov wrote:
 
  2015-06-10 11:37 GMT+03:00 Daniel P. Berrange berra...@redhat.com:
  The udev rules are really something the OS vendor should set up, so
  that it just works.
  
  
  I think so; vcpu hotplug is also covered by udev. Maybe we need
  something to hot-remove memory and cpu, because in the guest we need to
  offline them first.
  
 
 
 In fact, we also have a --guest option for the 'virsh setvcpus' command, which
 also uses qga commands to do the logical hotplug/unplug job, although udev
 rules seem to cover the vcpu logical hotplug issue.
 
 virsh # help setvcpus
 .
 --guest  modify cpu state in the guest
 
 
 BTW: we haven't seen OSes with vendor-provided udev rules for memory hotplug
 events, and adding such rules means that we have to *interfere inside the
 guest*, which does not seem like a good option.

I was suggesting that an RFE be filed with any vendor who doesn't do it
to add this capability, not that we add udev rules ourselves.

Regards,
Daniel



Re: [libvirt] [PATCH 0/8] logically memory hotplug via guest agent

2015-06-10 Thread zhang bo
On 2015/6/10 13:40, Vasiliy Tolstov wrote:

 2015-06-10 5:28 GMT+03:00 zhang bo oscar.zhan...@huawei.com:
 Thank you for your reply.
 Before this patch, we needed to manually online memory blocks inside the
 guest after DIMM memory hotplug for most *nix OSes (Windows guests
 automatically bring their memory blocks online after hotplugging).
 That is to say, we need to LOGICALLY hotplug memory after the PHYSICAL
 hotplug; this patch does the LOGICAL part.
 With this patch, we no longer need to get into the guest to manually
 online the blocks, which is not even possible for most host
 administrators.
 
 
 As I remember, this online step can easily be automated via udev rules.
 


Logically that's true, but adding udev rules means:
1) you have to get into the guest
2) you have to be familiar with udev rules

That is not as convenient as just calling a libvirt API to do it.

-- 
Oscar
oscar.zhan...@huawei.com  



Re: [libvirt] [PATCH 0/8] logically memory hotplug via guest agent

2015-06-10 Thread zhang bo
On 2015/6/10 17:31, Daniel P. Berrange wrote:

 On Wed, Jun 10, 2015 at 10:28:08AM +0100, Daniel P. Berrange wrote:
 On Wed, Jun 10, 2015 at 05:24:50PM +0800, zhang bo wrote:
 On 2015/6/10 16:39, Vasiliy Tolstov wrote:

 2015-06-10 11:37 GMT+03:00 Daniel P. Berrange berra...@redhat.com:
 The udev rules are really something the OS vendor should set up, so
 that it just works.


 I think so; vcpu hotplug is also covered by udev. Maybe we need
 something to hot-remove memory and cpu, because in the guest we need to
 offline them first.



 In fact, we also have a --guest option for the 'virsh setvcpus' command,
 which also uses qga commands to do the logical hotplug/unplug job,
 although udev rules seem to cover the vcpu logical hotplug issue.

 virsh # help setvcpus
 .
 --guest  modify cpu state in the guest


 BTW: we haven't seen OSes with vendor-provided udev rules for memory
 hotplug events, and adding such rules means that we have to *interfere
 inside the guest*, which does not seem like a good option.

 I was suggesting that an RFE be filed with any vendor who doesn't do it
 to add this capability, not that we add udev rules ourselves.
 
 Or actually, it probably is sufficient to just send a patch to the upstream
 systemd project to add the desired rule to udev. That way all Linux distros
 will inherit the feature when they update to new udev.
 

Then here comes the question: how do we deal with guests that are already in
use? I think it's better to operate on them from the host side without
getting into the guest. That's the advantage of qemu-guest-agent, so why not
take advantage of it?


-- 
Oscar
oscar.zhan...@huawei.com  



[libvirt] [PATCH 0/8] logically memory hotplug via guest agent

2015-06-09 Thread Zhang Bo
Logical memory hotplug via the guest agent, by enabling/disabling memory blocks.
The corresponding qga commands are: 'guest-get-memory-blocks',
'guest-set-memory-blocks' and 'guest-get-memory-block-info'.

Detailed flow:
1) get the memory block list; each member has 'phy-index', 'online' and
   'can-offline' parameters
2) get the memory block size, normally 128MB or 256MB for most OSes
3) convert the target memory size to a number of memory blocks, and check
   whether there are enough memory blocks to be set online/offline
4) update the memory block list info, and let the guest agent set the
   memory blocks online/offline


Note that because we hotplug memory logically by onlining/offlining MEMORY
BLOCKS, and each memory block is much larger than a KiB, the result can
deviate from the requested size by anything in the range (0, block_size).
block_size may be 128MB, 256MB, etc.; it differs between OSes.
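
As a rough sketch, the same flow can be exercised by hand through the agent
(the domain name and block index below are illustrative; note that the index
field is spelled 'phys-index' in the QGA schema):

    # steps 1 and 2: list the guest's memory blocks and query the block size
    virsh qemu-agent-command guest1 '{"execute":"guest-get-memory-blocks"}'
    virsh qemu-agent-command guest1 '{"execute":"guest-get-memory-block-info"}'

    # step 4: ask the agent to bring one block online
    virsh qemu-agent-command guest1 '{"execute":"guest-set-memory-blocks",
      "arguments":{"mem-blks":[{"phys-index":8,"online":true}]}}'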


Zhang Bo (8):
  lifecycle: add flag VIR_DOMAIN_MEM_GUEST for viDomainSetMemoryFlags
  qemu: agent: define structure of qemuAgentMemblockInfo
  qemu: agent: implement qemuAgentGetMemblocks
  qemu: agent: implement qemuAgentGetMemblockGeneralInfo
  qemu: agent: implement qemuAgentUpdateMemblocks
  qemu: agent: implement function qemuAgetSetMemblocks
  qemu: memory: logically hotplug memory with guest agent
  virsh: support memory hotplug with guest agent in virsh

 include/libvirt/libvirt-domain.h |   1 +
 src/libvirt-domain.c |   7 +
 src/qemu/qemu_agent.c| 307 +++
 src/qemu/qemu_agent.h|  22 +++
 src/qemu/qemu_driver.c   |  46 +-
 tools/virsh-domain.c |  10 +-
 tools/virsh.pod  |   7 +-
 7 files changed, 396 insertions(+), 4 deletions(-)

-- 
1.7.12.4




Re: [libvirt] [PATCH 0/8] logically memory hotplug via guest agent

2015-06-09 Thread Daniel P. Berrange
On Tue, Jun 09, 2015 at 05:33:24PM +0800, Zhang Bo wrote:
 Logical memory hotplug via the guest agent, by enabling/disabling memory blocks.
 The corresponding qga commands are: 'guest-get-memory-blocks',
 'guest-set-memory-blocks' and 'guest-get-memory-block-info'.
 
 Detailed flow:
 1) get the memory block list; each member has 'phy-index', 'online' and
    'can-offline' parameters
 2) get the memory block size, normally 128MB or 256MB for most OSes
 3) convert the target memory size to a number of memory blocks, and check
    whether there are enough memory blocks to be set online/offline
 4) update the memory block list info, and let the guest agent set the
    memory blocks online/offline
 
 
 Note that because we hotplug memory logically by onlining/offlining MEMORY
 BLOCKS, and each memory block is much larger than a KiB, the result can
 deviate from the requested size by anything in the range (0, block_size).
 block_size may be 128MB, 256MB, etc.; it differs between OSes.

So there are a lot of questions about this feature that are unclear to me.

This appears to operate entirely via guest agent commands. How does this
then correspond to an increased/decreased allocation in the host-side
QEMU? What are the upper/lower bounds on adding/removing blocks, e.g. what
prevents a malicious guest from asking for more memory to be added to
itself than we wish to allow? How is this better / worse than adjusting
memory via the balloon driver? How does this relate to the recently added
DIMM hot add/remove feature on the host side, if at all? Are the changes
made synchronously or asynchronously, i.e. does the API block while the
guest OS releases the memory from the blocks that are released, or is it
totally in the background like the balloon driver?

From a design POV, we're reusing the existing virDomainSetMemory API but
adding a restriction that it has to be in multiples of the block size,
which the mgmt app has no way of knowing upfront. It feels like this is
information we need to be able to expose to the app in some manner.
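
For instance, the block size could be queried from the agent itself (the
domain name and the returned value below are illustrative):

    # query the guest's memory block size (returned in bytes; 134217728 = 128MiB)
    virsh qemu-agent-command guest1 '{"execute":"guest-get-memory-block-info"}'
    {"return":{"size":134217728}}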

Regards,
Daniel



Re: [libvirt] [PATCH 0/8] logically memory hotplug via guest agent

2015-06-09 Thread Peter Krempa
On Tue, Jun 09, 2015 at 11:05:16 +0100, Daniel Berrange wrote:
 On Tue, Jun 09, 2015 at 05:33:24PM +0800, Zhang Bo wrote:
  Logical memory hotplug via the guest agent, by enabling/disabling memory 
  blocks.
  The corresponding qga commands are: 'guest-get-memory-blocks',
  'guest-set-memory-blocks' and 'guest-get-memory-block-info'.
  
  Detailed flow:
  1) get the memory block list; each member has 'phy-index', 'online' and
     'can-offline' parameters
  2) get the memory block size, normally 128MB or 256MB for most OSes
  3) convert the target memory size to a number of memory blocks, and check
     whether there are enough memory blocks to be set online/offline
  4) update the memory block list info, and let the guest agent set the
     memory blocks online/offline
  
  
  Note that because we hotplug memory logically by onlining/offlining MEMORY
  BLOCKS, and each memory block is much larger than a KiB, the result can
  deviate from the requested size by anything in the range (0, block_size).
  block_size may be 128MB, 256MB, etc.; it differs between OSes.
 
 So there are a lot of questions about this feature that are unclear to me.
 
 This appears to operate entirely via guest agent commands. How does this
 then correspond to an increased/decreased allocation in the host-side
 QEMU? What are the upper/lower bounds on adding/removing blocks, e.g. what
 prevents a malicious guest from asking for more memory to be added to
 itself than we wish to allow? How is this better / worse than adjusting
 memory via the balloon driver? How does this relate to the

There are two possibilities where this could be advantageous:

1) This could be better than ballooning (if it actually returned the
memory to the host, which it doesn't), since you would probably be able
to disable memory regions in specific NUMA nodes, which is not possible
with the current balloon driver (memory is taken randomly).

2) The guest OS sometimes needs to enable the memory region after ACPI
memory hotplug. The GA would be able to online such memory. For this
option we don't need to go through a different API though since it can
be compounded using a flag.

 recently added DIMM hot add/remove feature on the host side, if at all?
 Are the changes made synchronously or asynchronously, i.e. does the API
 block while the guest OS releases the memory from the blocks that are
 released, or is it totally in the background like the balloon driver?
 
 From a design POV, we're reusing the existing virDomainSetMemory API but
 adding a restriction that it has to be in multiples of the block size,
 which the mgmt app has no way of knowing upfront. It feels like this is
 information we need to be able to expose to the app in some manner.

Since this feature would not actually release any host resources, in
contrast with agent-based vCPU unplug, I don't think it's worth exposing
the memory region manipulation APIs via libvirt.

The only sane way I can think of to use it is to enable the memory regions
after hotplug.

Peter



Re: [libvirt] [PATCH 0/8] logically memory hotplug via guest agent

2015-06-09 Thread Daniel P. Berrange
On Tue, Jun 09, 2015 at 01:22:49PM +0200, Peter Krempa wrote:
 On Tue, Jun 09, 2015 at 11:05:16 +0100, Daniel Berrange wrote:
  On Tue, Jun 09, 2015 at 05:33:24PM +0800, Zhang Bo wrote:
    Logical memory hotplug via the guest agent, by enabling/disabling memory 
   blocks.
   The corresponding qga commands are: 'guest-get-memory-blocks',
   'guest-set-memory-blocks' and 'guest-get-memory-block-info'.
   
    Detailed flow:
    1) get the memory block list; each member has 'phy-index', 'online' and
       'can-offline' parameters
    2) get the memory block size, normally 128MB or 256MB for most OSes
    3) convert the target memory size to a number of memory blocks, and check
       whether there are enough memory blocks to be set online/offline
    4) update the memory block list info, and let the guest agent set the
       memory blocks online/offline
    
    
    Note that because we hotplug memory logically by onlining/offlining MEMORY
    BLOCKS, and each memory block is much larger than a KiB, the result can
    deviate from the requested size by anything in the range (0, block_size).
    block_size may be 128MB, 256MB, etc.; it differs between OSes.
  
   So there are a lot of questions about this feature that are unclear to me.
   
   This appears to operate entirely via guest agent commands. How does this
   then correspond to an increased/decreased allocation in the host-side
   QEMU? What are the upper/lower bounds on adding/removing blocks, e.g. what
   prevents a malicious guest from asking for more memory to be added to
   itself than we wish to allow? How is this better / worse than adjusting
   memory via the balloon driver? How does this relate to the
 
 There are two possibilities where this could be advantageous:
 
  1) This could be better than ballooning (if it actually returned the
  memory to the host, which it doesn't), since you would probably be able
  to disable memory regions in specific NUMA nodes, which is not possible
  with the current balloon driver (memory is taken randomly).
 
 2) The guest OS sometimes needs to enable the memory region after ACPI
 memory hotplug. The GA would be able to online such memory. For this
 option we don't need to go through a different API though since it can
 be compounded using a flag.

So, are you saying that we should not be adding this to the
virDomainSetMemory API as done in this series, and we should
instead be able to request automatic enabling/disabling of the
regions when we do the original DIMM hotplug ?

Regards,
Daniel



Re: [libvirt] [PATCH 0/8] logically memory hotplug via guest agent

2015-06-09 Thread Peter Krempa
On Tue, Jun 09, 2015 at 12:46:27 +0100, Daniel Berrange wrote:
 On Tue, Jun 09, 2015 at 01:22:49PM +0200, Peter Krempa wrote:
  On Tue, Jun 09, 2015 at 11:05:16 +0100, Daniel Berrange wrote:
   On Tue, Jun 09, 2015 at 05:33:24PM +0800, Zhang Bo wrote:
Logical memory hotplug via the guest agent, by enabling/disabling memory 
blocks.
The corresponding qga commands are: 'guest-get-memory-blocks',
'guest-set-memory-blocks' and 'guest-get-memory-block-info'.

Detailed flow:
1) get the memory block list; each member has 'phy-index', 'online' and
   'can-offline' parameters
2) get the memory block size, normally 128MB or 256MB for most OSes
3) convert the target memory size to a number of memory blocks, and check
   whether there are enough memory blocks to be set online/offline
4) update the memory block list info, and let the guest agent set the
   memory blocks online/offline


Note that because we hotplug memory logically by onlining/offlining MEMORY
BLOCKS, and each memory block is much larger than a KiB, the result can
deviate from the requested size by anything in the range (0, block_size).
block_size may be 128MB, 256MB, etc.; it differs between OSes.
   
    So there are a lot of questions about this feature that are unclear to me.
    
    This appears to operate entirely via guest agent commands. How does this
    then correspond to an increased/decreased allocation in the host-side
    QEMU? What are the upper/lower bounds on adding/removing blocks, e.g. what
    prevents a malicious guest from asking for more memory to be added to
    itself than we wish to allow? How is this better / worse than adjusting
    memory via the balloon driver? How does this relate to the
  
  There are two possibilities where this could be advantageous:
  
   1) This could be better than ballooning (if it actually returned the
   memory to the host, which it doesn't), since you would probably be able
   to disable memory regions in specific NUMA nodes, which is not possible
   with the current balloon driver (memory is taken randomly).
  
  2) The guest OS sometimes needs to enable the memory region after ACPI
  memory hotplug. The GA would be able to online such memory. For this
  option we don't need to go through a different API though since it can
  be compounded using a flag.
 
 So, are you saying that we should not be adding this to the
 virDomainSetMemory API as done in this series, and we should
 instead be able to request automatic enabling/disabling of the
 regions when we do the original DIMM hotplug ?

Well, that's the only place where using the memory region GA APIs would
make sense for libvirt.

Whether we should do it is not that clear. Windows does online the
regions automatically, and I was told that some Linux distros do it via
udev rules.

Peter



Re: [libvirt] [PATCH 0/8] logically memory hotplug via guest agent

2015-06-09 Thread Daniel P. Berrange
On Tue, Jun 09, 2015 at 02:03:13PM +0200, Peter Krempa wrote:
 On Tue, Jun 09, 2015 at 12:46:27 +0100, Daniel Berrange wrote:
  On Tue, Jun 09, 2015 at 01:22:49PM +0200, Peter Krempa wrote:
   On Tue, Jun 09, 2015 at 11:05:16 +0100, Daniel Berrange wrote:
On Tue, Jun 09, 2015 at 05:33:24PM +0800, Zhang Bo wrote:
 Logical memory hotplug via the guest agent, by enabling/disabling 
 memory blocks.
 The corresponding qga commands are: 'guest-get-memory-blocks',
 'guest-set-memory-blocks' and 'guest-get-memory-block-info'.
 
 Detailed flow:
 1) get the memory block list; each member has 'phy-index', 'online' and
    'can-offline' parameters
 2) get the memory block size, normally 128MB or 256MB for most OSes
 3) convert the target memory size to a number of memory blocks, and check
    whether there are enough memory blocks to be set online/offline
 4) update the memory block list info, and let the guest agent set the
    memory blocks online/offline
 
 
 Note that because we hotplug memory logically by onlining/offlining MEMORY
 BLOCKS, and each memory block is much larger than a KiB, the result can
 deviate from the requested size by anything in the range (0, block_size).
 block_size may be 128MB, 256MB, etc.; it differs between OSes.

So there are a lot of questions about this feature that are unclear to me.

This appears to operate entirely via guest agent commands. How does this
then correspond to an increased/decreased allocation in the host-side
QEMU? What are the upper/lower bounds on adding/removing blocks, e.g. what
prevents a malicious guest from asking for more memory to be added to
itself than we wish to allow? How is this better / worse than adjusting
memory via the balloon driver? How does this relate to the
   
   There are two possibilities where this could be advantageous:
   
    1) This could be better than ballooning (if it actually returned the
    memory to the host, which it doesn't), since you would probably be able
    to disable memory regions in specific NUMA nodes, which is not possible
    with the current balloon driver (memory is taken randomly).
   
   2) The guest OS sometimes needs to enable the memory region after ACPI
   memory hotplug. The GA would be able to online such memory. For this
   option we don't need to go through a different API though since it can
   be compounded using a flag.
  
  So, are you saying that we should not be adding this to the
  virDomainSetMemory API as done in this series, and we should
  instead be able to request automatic enabling/disabling of the
  regions when we do the original DIMM hotplug ?
 
 Well, that's the only place where using the memory region GA APIs would
 make sense for libvirt.
 
 Whether we should do it is not that clear. Windows does online the
 regions automatically, and I was told that some Linux distros do it via
 udev rules.

What do we do in the case of hot-unplug currently? Are we expecting the
guest admin to have manually offlined the regions before doing the
hot-unplug on the host?

Regards,
Daniel



Re: [libvirt] [PATCH 0/8] logically memory hotplug via guest agent

2015-06-09 Thread Peter Krempa
On Tue, Jun 09, 2015 at 13:05:35 +0100, Daniel Berrange wrote:
 On Tue, Jun 09, 2015 at 02:03:13PM +0200, Peter Krempa wrote:
  On Tue, Jun 09, 2015 at 12:46:27 +0100, Daniel Berrange wrote:
   On Tue, Jun 09, 2015 at 01:22:49PM +0200, Peter Krempa wrote:
On Tue, Jun 09, 2015 at 11:05:16 +0100, Daniel Berrange wrote:
 On Tue, Jun 09, 2015 at 05:33:24PM +0800, Zhang Bo wrote:


...

2) The guest OS sometimes needs to enable the memory region after ACPI
memory hotplug. The GA would be able to online such memory. For this
option we don't need to go through a different API though since it can
be compounded using a flag.
   
   So, are you saying that we should not be adding this to the
   virDomainSetMemory API as done in this series, and we should
   instead be able to request automatic enabling/disabling of the
   regions when we do the original DIMM hotplug ?
  
   Well, that's the only place where using the memory region GA APIs would
   make sense for libvirt.
   
   Whether we should do it is not that clear. Windows does online the
   regions automatically, and I was told that some Linux distros do it via
   udev rules.
 
  What do we do in the case of hot-unplug currently? Are we expecting the
  guest admin to have manually offlined the regions before doing the
  hot-unplug on the host?

You don't need to offline them prior to unplug. The guest OS handles
that automatically when it receives the request.



Re: [libvirt] [PATCH 0/8] logically memory hotplug via guest agent

2015-06-09 Thread Daniel P. Berrange
On Tue, Jun 09, 2015 at 02:12:39PM +0200, Peter Krempa wrote:
 On Tue, Jun 09, 2015 at 13:05:35 +0100, Daniel Berrange wrote:
  On Tue, Jun 09, 2015 at 02:03:13PM +0200, Peter Krempa wrote:
   On Tue, Jun 09, 2015 at 12:46:27 +0100, Daniel Berrange wrote:
On Tue, Jun 09, 2015 at 01:22:49PM +0200, Peter Krempa wrote:
 On Tue, Jun 09, 2015 at 11:05:16 +0100, Daniel Berrange wrote:
  On Tue, Jun 09, 2015 at 05:33:24PM +0800, Zhang Bo wrote:
 
 
 ...
 
 2) The guest OS sometimes needs to enable the memory region after ACPI
 memory hotplug. The GA would be able to online such memory. For this
 option we don't need to go through a different API though since it can
 be compounded using a flag.

So, are you saying that we should not be adding this to the
virDomainSetMemory API as done in this series, and we should
instead be able to request automatic enabling/disabling of the
regions when we do the original DIMM hotplug ?
   
    Well, that's the only place where using the memory region GA APIs would
    make sense for libvirt.
    
    Whether we should do it is not that clear. Windows does online the
    regions automatically, and I was told that some Linux distros do it via
    udev rules.
  
   What do we do in the case of hot-unplug currently? Are we expecting the
   guest admin to have manually offlined the regions before doing the
   hot-unplug on the host?
 
 You don't need to offline them prior to unplug. The guest OS handles
 that automatically when it receives the request.

Hmm, so if the guest can offline and online DIMMs automatically on
hotplug/unplug, then I'm puzzled about what value this patch series
really adds.


Regards,
Daniel



Re: [libvirt] [PATCH 0/8] logically memory hotplug via guest agent

2015-06-09 Thread zhang bo
On 2015/6/9 20:47, Daniel P. Berrange wrote:

 On Tue, Jun 09, 2015 at 02:12:39PM +0200, Peter Krempa wrote:
 On Tue, Jun 09, 2015 at 13:05:35 +0100, Daniel Berrange wrote:
 On Tue, Jun 09, 2015 at 02:03:13PM +0200, Peter Krempa wrote:
 On Tue, Jun 09, 2015 at 12:46:27 +0100, Daniel Berrange wrote:
 On Tue, Jun 09, 2015 at 01:22:49PM +0200, Peter Krempa wrote:
 On Tue, Jun 09, 2015 at 11:05:16 +0100, Daniel Berrange wrote:
 On Tue, Jun 09, 2015 at 05:33:24PM +0800, Zhang Bo wrote:


 ...

 2) The guest OS sometimes needs to enable the memory region after ACPI
 memory hotplug. The GA would be able to online such memory. For this
 option we don't need to go through a different API though since it can
 be compounded using a flag.

 So, are you saying that we should not be adding this to the
 virDomainSetMemory API as done in this series, and we should
 instead be able to request automatic enabling/disabling of the
 regions when we do the original DIMM hotplug ?

 Well, that's the only place where using the memory region GA APIs would
 make sense for libvirt.

 Whether we should do it is not that clear. Windows does online the
 regions automatically, and I was told that some Linux distros do it via
 udev rules.

 What do we do in the case of hot-unplug currently? Are we expecting the
 guest admin to have manually offlined the regions before doing the
 hot-unplug on the host?

 You don't need to offline them prior to unplug. The guest OS handles
 that automatically when it receives the request.
 
 Hmm, so if the guest can offline and online DIMMs automatically on
 hotplug/unplug, then I'm puzzled about what value this patch series
 really adds.
 
 
 Regards,
 Daniel


Thank you for your reply.
Before this patch, we needed to manually online memory blocks inside the
guest after DIMM memory hotplug for most *nix OSes (Windows guests
automatically bring their memory blocks online after hotplugging).
That is to say, we need to LOGICALLY hotplug memory after the PHYSICAL
hotplug; this patch does the LOGICAL part.
With this patch, we no longer need to get into the guest to manually
online the blocks, which is not even possible for most host
administrators.
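
For reference, the manual step being avoided looks roughly like this inside
the guest (the memory block number is illustrative and differs per guest):

    # check which memory blocks the guest sees and their state
    grep . /sys/devices/system/memory/memory*/state
    # bring a newly hot-added block online by hand
    echo online > /sys/devices/system/memory/memory32/state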



-- 
Oscar
oscar.zhan...@huawei.com  



Re: [libvirt] [PATCH 0/8] logically memory hotplug via guest agent

2015-06-09 Thread Vasiliy Tolstov
2015-06-10 5:28 GMT+03:00 zhang bo oscar.zhan...@huawei.com:
 Thank you for your reply.
 Before this patch, we needed to manually online memory blocks inside the
 guest after DIMM memory hotplug for most *nix OSes (Windows guests
 automatically bring their memory blocks online after hotplugging).
 That is to say, we need to LOGICALLY hotplug memory after the PHYSICAL
 hotplug; this patch does the LOGICAL part.
 With this patch, we no longer need to get into the guest to manually
 online the blocks, which is not even possible for most host
 administrators.


As I remember, this online step can easily be automated via udev rules.

-- 
Vasiliy Tolstov,
e-mail: v.tols...@selfip.ru
