Re: [libvirt] libnuma build failure [was: securityselinuxlabeltest test fails on v1.2.5]

2015-04-09 Thread Zhi Yong Wu
Hi,

The issue still exists if the upstream source is built on RHEL 6.x. After
numactl and numactl-devel are upgraded to 2.0.9, the issue is fixed.


On Tue, Jul 1, 2014 at 5:26 AM, Eric Blake ebl...@redhat.com wrote:
 On 06/30/2014 01:46 PM, Scott Sullivan wrote:
 I've tested the v1.2.6-rc2 git tag; I'm getting this build error:

   CC util/libvirt_util_la-virnuma.lo
 util/virnuma.c: In function 'virNumaNodeIsAvailable':
 util/virnuma.c:428: error: 'numa_nodes_ptr' undeclared (first use in
 this function)
 util/virnuma.c:428: error: (Each undeclared identifier is reported only
 once
 util/virnuma.c:428: error: for each function it appears in.)

 What version of numactl-devel do you have installed?

 make[3]: *** [util/libvirt_util_la-virnuma.lo] Error 1
 make[3]: Leaving directory `/home/rpmbuild/packages/libvirt/src'
 make[2]: *** [all] Error 2
 make[2]: Leaving directory `/home/rpmbuild/packages/libvirt/src'
 make[1]: *** [all-recursive] Error 1
 make[1]: Leaving directory `/home/rpmbuild/packages/libvirt'
 make: *** [all] Error 2
 error: Bad exit status from /var/tmp/rpm-tmp.3Gu7nd (%build)

 Build works fine on tag v1.2.5-maint.

 Sounds like we need to make the code added in the meantime conditional so
 that it still compiles against older numa libraries.
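 Roughly, the fix could look like this (a minimal sketch, assuming a
 configure-time macro -- here called WITH_NUMA_NODES_PTR, an illustrative
 name -- is defined only when the installed numactl exports numa_nodes_ptr):

     #include <numa.h>
     #include <stdbool.h>

     static bool nodeIsAvailable(int node)
     {
     #ifdef WITH_NUMA_NODES_PTR
         /* newer numactl: ask the library which nodes exist */
         return numa_bitmask_isbitset(numa_nodes_ptr, node);
     #else
         /* older numactl: fall back to a bounds check against numa_max_node() */
         return node >= 0 && node <= numa_max_node();
     #endif
     }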

 --
 Eric Blake   eblake redhat com    +1-919-301-3266
 Libvirt virtualization library http://libvirt.org





-- 
Regards,

Zhi Yong Wu



[libvirt] error: negative width in bit-field '_gl_verify_error_if_negative' ?

2015-04-01 Thread Zhi Yong Wu
Hi,

Has anyone hit this issue when compiling libvirt?

  CCLD libvirt.la
  CC   libvirt_qemu_la-libvirt-qemu.lo
  CCLD libvirt-qemu.la
  CC   libvirt_lxc_la-libvirt-lxc.lo
  CCLD libvirt-lxc.la
  CC   lockd_la-lock_driver_lockd.lo
  CC   lockd_la-lock_protocol.lo
  CCLD lockd.la
  CC   libvirt_driver_qemu_impl_la-qemu_agent.lo
  CC   libvirt_driver_qemu_impl_la-qemu_capabilities.lo
qemu/qemu_capabilities.c:55: error: negative width in bit-field
'_gl_verify_error_if_negative'
make[3]: *** [libvirt_driver_qemu_impl_la-qemu_capabilities.lo] Error 1
make[3]: Leaving directory `/home/zhiyong.wzy/libvirt-source/src'
make[2]: *** [all] Error 2
make[2]: Leaving directory `/home/zhiyong.wzy/libvirt-source/src'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/home/zhiyong.wzy/libvirt-source'
make: *** [all] Error 2


-- 
Regards,

Zhi Yong Wu



Re: [libvirt] error: negative width in bit-field '_gl_verify_error_if_negative' ?

2015-04-01 Thread Zhi Yong Wu
It's fixed now, thanks.

On Thu, Apr 2, 2015 at 12:16 AM, Eric Blake ebl...@redhat.com wrote:
 On 04/01/2015 03:38 AM, Zhi Yong Wu wrote:
 Hi,

 Has anyone hit this issue when compiling libvirt?

   CCLD libvirt.la
   CC   libvirt_qemu_la-libvirt-qemu.lo
   CCLD libvirt-qemu.la
   CC   libvirt_lxc_la-libvirt-lxc.lo
   CCLD libvirt-lxc.la
   CC   lockd_la-lock_driver_lockd.lo
   CC   lockd_la-lock_protocol.lo
   CCLD lockd.la
   CC   libvirt_driver_qemu_impl_la-qemu_agent.lo
   CC   libvirt_driver_qemu_impl_la-qemu_capabilities.lo
 qemu/qemu_capabilities.c:55: error: negative width in bit-field
 '_gl_verify_error_if_negative'

 Usually, this means you changed something in code that requires a
 parallel change somewhere else to keep array sizes in sync, or whatever
 else the compile-time assertion is checking for.  This particular error
 is part of VIR_ENUM_IMPL, which means you probably added a capability in
 qemu_capabilities.h but forgot to associate it with a string here.
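 For illustration only (these are not the actual libvirt macros, and the
 enum values are hypothetical), the pattern behind the error looks like
 this: VIR_ENUM_IMPL ends with a compile-time check that the string table
 has exactly <enum>_LAST entries, and a mismatch surfaces as a negative
 bit-field width:

     typedef enum {
         MY_CAPS_FOO,
         MY_CAPS_BAR,      /* newly added capability */
         MY_CAPS_LAST      /* must stay last */
     } myCaps;

     static const char *myCapsStrings[] = {
         "foo",
         "bar",            /* forgetting this entry after adding MY_CAPS_BAR
                            * reproduces the error above */
     };

     /* stand-in for the gnulib verify() used by VIR_ENUM_IMPL */
     #define VERIFY(cond) \
         typedef struct { unsigned int dummy : (cond) ? 1 : -1 } verify_type

     VERIFY(sizeof(myCapsStrings) / sizeof(myCapsStrings[0]) == MY_CAPS_LAST);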

 --
 Eric Blake   eblake redhat com    +1-919-301-3266
 Libvirt virtualization library http://libvirt.org




-- 
Regards,

Zhi Yong Wu



Re: [libvirt] [RFC PATCH v2 00/12] qemu: add support to hot-plug/unplug cpu device

2015-03-24 Thread Zhi Yong Wu
Hi,

Do you have a plan to update this patchset based on the recent comments,
or is there a newer version to post?

I noticed that the patchset for memory hotplug has been merged; is there
any plan to merge this patchset as well?


On Fri, Feb 13, 2015 at 12:13 AM, Peter Krempa pkre...@redhat.com wrote:
 On Thu, Feb 12, 2015 at 19:53:13 +0800, Zhu Guihua wrote:
 Hi all,

 Any comments about this series?

 I'm not sure whether this series' method to support cpu hotplug in
 libvirt is reasonable, so could anyone give more suggestions about this
 function? Thanks.

 Well, as Dan pointed out in his review of this series and the previous
 version, we are not convinced that we need a way to specify the CPU
 model and other parameters, as having dissimilar CPUs doesn't make sense.

 A solution is either to build the cpu hotplug support on top of the
 existing API, or to provide enough information to convince us that having
 the cpu model in the XML actually makes sense.
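 For reference, building on the existing API would mean something as simple
 as this sketch (illustrative only; error handling omitted):

     #include <libvirt/libvirt.h>

     /* raise the number of active vCPUs on a running guest using the
      * existing vCPU API instead of a per-CPU device element */
     static int hotplug_vcpus(virDomainPtr dom, unsigned int count)
     {
         return virDomainSetVcpusFlags(dom, count, VIR_DOMAIN_AFFECT_LIVE);
     }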


 Regards,
 Zhu

 Peter




-- 
Regards,

Zhi Yong Wu



Re: [libvirt] [RFC 0/5] block: File descriptor passing using -open-hook-fd

2012-05-17 Thread Zhi Yong Wu
On Thu, May 17, 2012 at 9:42 PM, Stefan Hajnoczi
stefa...@linux.vnet.ibm.com wrote:
 On Fri, May 04, 2012 at 11:28:47AM +0800, Zhi Yong Wu wrote:
 On Tue, May 1, 2012 at 11:31 PM, Stefan Hajnoczi
 stefa...@linux.vnet.ibm.com wrote:
  Libvirt can take advantage of SELinux to restrict the QEMU process and
  prevent it from opening files that it should not have access to.  This
  improves security because it prevents the attacker from escaping the QEMU
  process if they manage to gain control.
 
  NFS has been a pain point for SELinux because it does not support labels
  (which I believe are stored in extended attributes).  In other words, it's
  not possible to use SELinux goodness on QEMU when image files are located
  on NFS.  Today we have to allow QEMU access to any file on the NFS export
  rather than restricting specifically to the image files that the guest
  requires.
 
  File descriptor passing is a solution to this problem and might also come
  in handy elsewhere.  Libvirt or another external process chooses files
  which QEMU is allowed to access and provides just those file descriptors -
  QEMU cannot open the files itself.
 
  This series adds the -open-hook-fd command-line option.  Whenever QEMU
  needs to open an image file it sends a request over the given UNIX domain
  socket.  The response includes the file descriptor or an errno on failure.
  Please see the patches for details on the protocol.
 
  The -open-hook-fd approach allows QEMU to support file descriptor passing
  without changing -drive.  It also supports snapshot_blkdev and other
  commands
 By the way, how will it support them?

 The problem with snapshot_blkdev is that closing a file and opening a
 new file cannot be done by the QEMU process when an SELinux policy is in
 place to prevent opening files.

 The -open-hook-fd approach works even when the QEMU process is not
 allowed to open files since file descriptor passing over a UNIX domain
 socket is used to open files on behalf of QEMU.
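 For reference, the receiving end of that protocol boils down to standard
 SCM_RIGHTS handling; a minimal sketch (not the code in this series) looks
 like this:

     #include <string.h>
     #include <sys/socket.h>
     #include <sys/uio.h>

     /* receive one file descriptor over a connected UNIX domain socket */
     static int receive_fd(int sock)
     {
         struct msghdr msg = { 0 };
         struct iovec iov;
         char byte = 0;
         char control[CMSG_SPACE(sizeof(int))];
         struct cmsghdr *cmsg;

         iov.iov_base = &byte;          /* at least one byte of real data */
         iov.iov_len = 1;
         msg.msg_iov = &iov;
         msg.msg_iovlen = 1;
         msg.msg_control = control;
         msg.msg_controllen = sizeof(control);

         if (recvmsg(sock, &msg, 0) <= 0)
             return -1;

         for (cmsg = CMSG_FIRSTHDR(&msg); cmsg; cmsg = CMSG_NXTHDR(&msg, cmsg)) {
             if (cmsg->cmsg_level == SOL_SOCKET && cmsg->cmsg_type == SCM_RIGHTS) {
                 int fd;
                 memcpy(&fd, CMSG_DATA(cmsg), sizeof(fd));
                 return fd;             /* caller now owns the descriptor */
             }
         }
         return -1;                     /* no descriptor in the message */
     }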
Do you mean that libvirt will provide QEMU with such a service?  When QEMU
needs to open or close a file, it sends a request to libvirt?

 Stefan




-- 
Regards,

Zhi Yong Wu



Re: [libvirt] [RFC 0/5] block: File descriptor passing using -open-hook-fd

2012-05-17 Thread Zhi Yong Wu
On Thu, May 17, 2012 at 9:42 PM, Stefan Hajnoczi
stefa...@linux.vnet.ibm.com wrote:
 On Fri, May 04, 2012 at 11:28:47AM +0800, Zhi Yong Wu wrote:
 On Tue, May 1, 2012 at 11:31 PM, Stefan Hajnoczi
 stefa...@linux.vnet.ibm.com wrote:
  Libvirt can take advantage of SELinux to restrict the QEMU process and
  prevent it from opening files that it should not have access to.  This
  improves security because it prevents the attacker from escaping the QEMU
  process if they manage to gain control.
 
  NFS has been a pain point for SELinux because it does not support labels
  (which I believe are stored in extended attributes).  In other words, it's
  not possible to use SELinux goodness on QEMU when image files are located
  on NFS.  Today we have to allow QEMU access to any file on the NFS export
  rather than restricting specifically to the image files that the guest
  requires.
 
  File descriptor passing is a solution to this problem and might also come
  in handy elsewhere.  Libvirt or another external process chooses files
  which QEMU is allowed to access and provides just those file descriptors -
  QEMU cannot open the files itself.
 
  This series adds the -open-hook-fd command-line option.  Whenever QEMU
  needs to open an image file it sends a request over the given UNIX domain
  socket.  The response includes the file descriptor or an errno on failure.
  Please see the patches for details on the protocol.
 
  The -open-hook-fd approach allows QEMU to support file descriptor passing
  without changing -drive.  It also supports snapshot_blkdev and other
  commands
 By the way, how will it support them?

 The problem with snapshot_blkdev is that closing a file and opening a
 new file cannot be done by the QEMU process when an SELinux policy is in
 place to prevent opening files.

 The -open-hook-fd approach works even when the QEMU process is not
 allowed to open files since file descriptor passing over a UNIX domain
 socket is used to open files on behalf of QEMU.
I thought that this patchset only lets QEMU passively receive the fd
parameter passed in from the upper-layer application.
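For completeness, the other side -- a hook server that opens the file on
QEMU's behalf and passes the descriptor back -- could look roughly like
this (a sketch only, not the code in this series):

    #include <fcntl.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* open 'path' and send the resulting fd back over the UNIX socket */
    static int send_open_fd(int sock, const char *path, int flags)
    {
        struct msghdr msg = { 0 };
        struct iovec iov;
        char byte = 0;
        char control[CMSG_SPACE(sizeof(int))];
        struct cmsghdr *cmsg;
        int fd = open(path, flags);

        if (fd < 0)
            return -1;              /* the real protocol reports errno instead */

        iov.iov_base = &byte;
        iov.iov_len = 1;
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = control;
        msg.msg_controllen = sizeof(control);

        cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd, sizeof(fd));

        /* the sender may close(fd) afterwards; the receiver keeps its copy */
        return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
    }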

 Stefan




-- 
Regards,

Zhi Yong Wu



Re: [libvirt] [RFC 0/5] block: File descriptor passing using -open-hook-fd

2012-05-03 Thread Zhi Yong Wu
On Tue, May 1, 2012 at 11:31 PM, Stefan Hajnoczi
stefa...@linux.vnet.ibm.com wrote:
 Libvirt can take advantage of SELinux to restrict the QEMU process and prevent
 it from opening files that it should not have access to.  This improves
 security because it prevents the attacker from escaping the QEMU process if
 they manage to gain control.

 NFS has been a pain point for SELinux because it does not support labels
 (which I believe are stored in extended attributes).  In other words, it's not
 possible to use SELinux goodness on QEMU when image files are located on NFS.
 Today we have to allow QEMU access to any file on the NFS export rather than
 restricting specifically to the image files that the guest requires.

 File descriptor passing is a solution to this problem and might also come in
 handy elsewhere.  Libvirt or another external process chooses files which QEMU
 is allowed to access and provides just those file descriptors - QEMU cannot
 open the files itself.

 This series adds the -open-hook-fd command-line option.  Whenever QEMU needs
 to open an image file it sends a request over the given UNIX domain socket.  The
 response includes the file descriptor or an errno on failure.  Please see the
 patches for details on the protocol.

 The -open-hook-fd approach allows QEMU to support file descriptor passing
 without changing -drive.  It also supports snapshot_blkdev and other commands
By the way, how will it support them?
 that re-open image files.

 Anthony Liguori aligu...@us.ibm.com wrote most of these patches.  I added a
 demo -open-hook-fd server and added some small fixes.  Since Anthony is
 traveling right now I'm sending the RFC for discussion.

 Anthony Liguori (3):
  block: add open() wrapper that can be hooked by libvirt
  block: add new command line parameter that and protocol description
  block: plumb up open-hook-fd option

 Stefan Hajnoczi (2):
  osdep: add qemu_recvmsg() wrapper
  Example -open-hook-fd server

  block.c           |  107 ++
  block.h           |    2 +
  block/raw-posix.c |   18 +++
  block/raw-win32.c |    2 +-
  block/vdi.c       |    2 +-
  block/vmdk.c      |    6 +--
  block/vpc.c       |    2 +-
  block/vvfat.c     |    4 +-
  block_int.h       |   12 +
  osdep.c           |   46 +
  qemu-common.h     |    2 +
  qemu-options.hx   |   42 +++
  test-fd-passing.c |  147 +
  vl.c              |    3 ++
  14 files changed, 378 insertions(+), 17 deletions(-)
  create mode 100644 test-fd-passing.c

 --
 1.7.10




-- 
Regards,

Zhi Yong Wu



Re: [libvirt] [PATCHv6 0/7] block io throttle via per-device block IO tuning

2011-11-30 Thread Zhi Yong Wu
Eric,

You're very kind; thank you so much. ;)

On Thu, Dec 1, 2011 at 3:03 AM, Eric Blake ebl...@redhat.com wrote:
 On 11/24/2011 01:32 AM, Daniel Veillard wrote:
 On Wed, Nov 23, 2011 at 02:44:42PM -0700, Eric Blake wrote:
 Here's the latest state of Lei's patch series with all my comments
 folded in.  I may have a few more tweaks to make now that I'm at
 the point of testing things with both old qemu (graceful rejection)
 and new qemu (sensical return values), so it may be a day or two (or
 even a weekend, since this is a holiday weekend for me) before I
 actually push this, so it wouldn't hurt if anyone else wants to
 review in the meantime.


   ACK to the series once the bug in 3/7 is fixed; if possible, please fix the
 couple of other small nits :-)

 Nits fixed and complete series now pushed.

 --
 Eric Blake   ebl...@redhat.com    +1-919-301-3266
 Libvirt virtualization library http://libvirt.org





-- 
Regards,

Zhi Yong Wu



Re: [libvirt] [PATCH 0/8 v5] Summary on block IO throttle

2011-11-22 Thread Zhi Yong Wu
On Wed, Nov 23, 2011 at 12:27 PM, Eric Blake ebl...@redhat.com wrote:
 On 11/15/2011 02:02 AM, Lei Li wrote:
 Changes since V3
  - Use virTypedParameterPtr instead of a specific struct in the libvirt public API.
  - Relevant changes to the remote driver, qemu driver, python support and virsh.

 To help add QEMU I/O throttling support to libvirt, we plan to complete
 it by adding a new API virDomain{Set, Get}BlockIoThrottle(), a new command
 'blkdeviotune', and Python bindings.

 Notes: Now all the planned features have been implemented (#1 and #2 were
 implemented by Zhi Yong Wu), and the previous comments have all been fixed
 up too.  The qemu-side patches have been accepted upstream and are expected
 to be part of the QEMU 1.1 release; git tree from Zhi Yong:

 http://repo.or.cz/w/qemu/kevin.git/shortlog/refs/heads/block

 I just realized that the block_set_io_throttle monitor command is not
 yet in released upstream qemu.  I will continue reviewing this series,
 but it's harder to justify including it into libvirt 0.9.8 unless we
 have a good feel that the design for qemu is stable and unlikely to
 undergo further modifications.  We have included libvirt changes for
 unreleased qemu features in the past, but only after pointing to qemu
 list archives documenting that there is large consensus for the proposed
 design, and that any further changes prior to a qemu release containing
 the design will be internal only.
As you may know, the qemu 1.1 development branch will be created after
qemu 1.0 is announced, which is expected around 1 Dec this year.  So the
currently accepted block features are all kept in Kevin's block git tree.

 --
 Eric Blake   ebl...@redhat.com    +1-919-301-3266
 Libvirt virtualization library http://libvirt.org






-- 
Regards,

Zhi Yong Wu



Re: [libvirt] [PATCH 0/8 v5] Summary on block IO throttle

2011-11-22 Thread Zhi Yong Wu
On Wed, Nov 23, 2011 at 12:27 PM, Eric Blake ebl...@redhat.com wrote:
 On 11/15/2011 02:02 AM, Lei Li wrote:
 Changes since V3
  - Use virTypedParameterPtr instead of a specific struct in the libvirt public API.
  - Relevant changes to the remote driver, qemu driver, python support and virsh.

 To help add QEMU I/O throttling support to libvirt, we plan to complete
 it by adding a new API virDomain{Set, Get}BlockIoThrottle(), a new command
 'blkdeviotune', and Python bindings.

 Notes: Now all the planned features have been implemented (#1 and #2 were
 implemented by Zhi Yong Wu), and the previous comments have all been fixed
 up too.  The qemu-side patches have been accepted upstream and are expected
 to be part of the QEMU 1.1 release; git tree from Zhi Yong:

 http://repo.or.cz/w/qemu/kevin.git/shortlog/refs/heads/block

 I just realized that the block_set_io_throttle monitor command is not
 yet in released upstream qemu.  I will continue reviewing this series,
 but it's harder to justify including it into libvirt 0.9.8 unless we
 have a good feel that the design for qemu is stable and unlikely to
I strongly believe that the syntax of this qemu monitor command will not
change later. :)
Moreover, it has not changed since the patches for qemu block I/O
throttling were published.
 undergo further modifications.  We have included libvirt changes for
 unreleased qemu features in the past, but only after pointing to qemu
 list archives documenting that there is large consensus for the proposed
 design, and that any further changes prior to a qemu release containing
 the design will be internal only.

 --
 Eric Blake   ebl...@redhat.com    +1-919-301-3266
 Libvirt virtualization library http://libvirt.org






-- 
Regards,

Zhi Yong Wu



Re: [libvirt] [RFC PATCH 0/8 v3] Summary on block IO throttle

2011-11-09 Thread Zhi Yong Wu
On Thu, Nov 10, 2011 at 4:32 AM, Lei Li li...@linux.vnet.ibm.com wrote:
 Changes since V2
  - Implement the Python binding support for setting blkio throttling.
  - Implement --current --live --config options support to unify the libvirt 
 API.
  - Add changes in docs and tests.
  - Some changes suggested by Adam Litke, Eric Blake, Daniel P. Berrange.
   - Change the XML schema.
   - API name to virDomain{Set, Get}BlockIoTune.
   - Parameters changed to make them more self-explanatory.
   - virsh command name to blkdeviotune.
  - And other fixups.

 Changes since V1
  - Implement the support to get the block io throttling for
   a device as read only connection - QMP/HMP.
  - Split virDomainBlockIoThrottle into two separate functions
     virDomainSetBlockIoThrottle - Set block I/O limits for a device
      - Requires a connection in 'write' mode.
      - Limits (info) structure passed as an input parameter
     virDomainGetBlockIoThrottle - Get the current block I/O limits for a 
 device
      - Works on a read-only connection.
      - Current limits are written to the output parameter (reply).
  - And other fixups suggested by Adam Litke, Daniel P. Berrange.
   - As for dynamically allocating the blkiothrottle struct, I will fix
     it when implementing --current/--live/--config options support.

 Today libvirt supports the cgroups blkio-controller, which handles
 proportional shares and throughput/iops limits on host block devices.
 blkio-controller does not support network file systems (NFS) or other
 QEMU remote block drivers (curl, Ceph/rbd, sheepdog) since they are
 not host block devices. QEMU I/O throttling works with all types of
 drives and can be applied independently to each drive attached to
 a guest and supports throughput/iops limits.

 To help add QEMU I/O throttling support to libvirt, we plan to complete
 it by adding a new API virDomain{Set, Get}BlockIoThrottle(), a new command
 'blkdeviotune', and Python bindings.

 Notes: Now all the planned features have been implemented (#1 and #2 were
 implemented by Zhi Yong Wu), and the previous comments have all been fixed
 up too.  The qemu-side patches have just been accepted upstream and are
 expected to be part of the QEMU 1.1 release; git tree from Zhi Yong:

 http://repo.or.cz/w/qemu/kevin.git/shortlog/refs/heads/block


 1) Enable the blkio throttling in xml when guest is starting up.

 Add blkio throttling in xml as follows:

     <disk type='file' device='disk'>
       ...
       <iotune>
         <total_bytes_sec>nnn</total_bytes_sec>
         ...
       </iotune>
       ...
     </disk>

 2) Enable blkio throttling setting at guest running time.

 virsh blkdeviotune domain device [--total_bytes_sec <number>] \
 [--read_bytes_sec <number>] [--write_bytes_sec <number>] \
 [--total_iops_sec <number>] [--read_iops_sec <number>] \
 [--write_iops_sec <number>]

 3) The support to get the current block i/o throttling for a device - HMP/QMP.

 virsh blkiothrottle domain device
 total_bytes_sec:
 read_bytes_sec:
 write_bytes_sec:
 total_iops_sec:
 read_iops_sec:
 write_iops_sec:

 4) Python binding support for setting blkio throttling.
 5) --current --live --config options support to unify the libvirt API.

 virsh blkdeviotune domain device [--total_bytes_sec number] 
 [--read_bytes_sec number]
 [--write_bytes_sec number] [--total_iops_sec number] [--read_iops_sec 
 number]
 [--write_iops_sec number] [--config] [--live] [--current]
Thanks to Lei Li for the remaining work.  Below is just one reminder:
the QEMU command-line options have some limitations, as listed below:

(1) global bps limit.
   -drive bps=xxx            in bytes/s
(2) only read bps limit
   -drive bps_rd=xxx         in bytes/s
(3) only write bps limit
   -drive bps_wr=xxx         in bytes/s
(4) global iops limit
   -drive iops=xxx           in ios/s
(5) only read iops limit
   -drive iops_rd=xxx        in ios/s
(6) only write iops limit
   -drive iops_wr=xxx        in ios/s
(7) the combination of some limits.
   -drive bps=xxx,iops=xxx

Known Limitations:
(1) #1 cannot be used together with #2 or #3
(2) #4 cannot be used together with #5 or #6
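A small sketch of what those limitations amount to in code (illustrative
only, not QEMU's actual checker):

    /* a total limit and its read/write split must not be set together */
    static int throttle_limits_are_valid(unsigned long long bps,
                                         unsigned long long bps_rd,
                                         unsigned long long bps_wr,
                                         unsigned long long iops,
                                         unsigned long long iops_rd,
                                         unsigned long long iops_wr)
    {
        if (bps && (bps_rd || bps_wr))
            return 0;   /* limitation (1) */
        if (iops && (iops_rd || iops_wr))
            return 0;   /* limitation (2) */
        return 1;
    }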




  daemon/remote.c                                       |   87 +++
  docs/formatdomain.html.in                             |   30 ++
  docs/schemas/domaincommon.rng                         |   24 +
  include/libvirt/libvirt.h.in                          |   25 ++
  python/generator.py                                   |    2
  python/libvirt-override-api.xml                       |   16 +
  python/libvirt-override.c                             |   85 +++
  src/conf/domain_conf.c                                |  101 
  src/conf/domain_conf.h                                |   12
  src/driver.h                                          |   18 +
  src/libvirt.c                                         |  115 +
  src/libvirt_public.syms                               |    2
  src/qemu/qemu_command.c                               |   33 ++
  src/qemu/qemu_driver.c

Re: [libvirt] [RFC PATCH 3/5] Implement virDomainBlockIoThrottle for the qemu driver

2011-10-13 Thread Zhi Yong Wu
On Wed, Oct 12, 2011 at 8:39 PM, Adam Litke a...@us.ibm.com wrote:
 On Wed, Oct 12, 2011 at 03:02:12PM +0800, Zhi Yong Wu wrote:
 On Tue, Oct 11, 2011 at 11:19 PM, Adam Litke a...@us.ibm.com wrote:
  On Mon, Oct 10, 2011 at 09:45:11PM +0800, Lei HH Li wrote:
 
  Summary here.
 
 
  Signed-off-by: Zhi Yong Wu wu...@linux.vnet.ibm.com
  ---
   src/qemu/qemu_command.c      |   35 ++
   src/qemu/qemu_driver.c       |   54 +
   src/qemu/qemu_migration.c    |   16 +++---
   src/qemu/qemu_monitor.c      |   19 +++
   src/qemu/qemu_monitor.h      |    6 ++
   src/qemu/qemu_monitor_json.c |  107 
  ++
   src/qemu/qemu_monitor_json.h |    6 ++
   src/qemu/qemu_monitor_text.c |   61 
   src/qemu/qemu_monitor_text.h |    6 ++
   9 files changed, 302 insertions(+), 8 deletions(-)
 
  diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
  index cf99f89..c4d2938 100644
  --- a/src/qemu/qemu_command.c
  +++ b/src/qemu/qemu_command.c
  @@ -1728,6 +1728,41 @@ qemuBuildDriveStr(virDomainDiskDefPtr disk,
           }
       }
 
  +    /*block I/O throttling*/
  +    if (disk->blkiothrottle.bps || disk->blkiothrottle.bps_rd
  +        || disk->blkiothrottle.bps_wr || disk->blkiothrottle.iops
  +        || disk->blkiothrottle.iops_rd || disk->blkiothrottle.iops_wr) {
 
  The above suggests that you should dynamically allocate the blkiothrottle
  struct.  Then you could reduce this check to:
 If the structure is dynamically allocated, it will be easy to leak
 memory, even though the check is simplified.

 Not using dynamic allocation because it is harder to do correctly is probably
 not the best reasoning.  There is a virDomainDiskDefFree() function to help
 free dynamic memory in the disk definition.  Anyway, there are also other
 ways to clean this up.  For example, you could add another field to
 disk->blkiothrottle (.enabled?) to indicate whether throttling is active.
 Then you only have one
Good idea.
 variable to check.  For the record, I still prefer using a pointer to
 blkiothrottle for this.
I am only expressing my opinion. :)  I prefer using the structure with
.enabled, not a pointer.  But the final implementation depends on you and
Lei Li.

 
        if (disk->blkiothrottle) {
 
  +        if (disk->blkiothrottle.bps) {
  +            virBufferAsprintf(opt, ",bps=%llu",
  +                              disk->blkiothrottle.bps);
  +        }
  +
  +        if (disk->blkiothrottle.bps_rd) {
  +            virBufferAsprintf(opt, ",bps_rd=%llu",
  +                              disk->blkiothrottle.bps_rd);
  +        }
  +
  +        if (disk->blkiothrottle.bps_wr) {
  +            virBufferAsprintf(opt, ",bps_wr=%llu",
  +                              disk->blkiothrottle.bps_wr);
  +        }
  +
  +        if (disk->blkiothrottle.iops) {
  +            virBufferAsprintf(opt, ",iops=%llu",
  +                              disk->blkiothrottle.iops);
  +        }
  +
  +        if (disk->blkiothrottle.iops_rd) {
  +            virBufferAsprintf(opt, ",iops_rd=%llu",
  +                              disk->blkiothrottle.iops_rd);
  +        }
  +
  +        if (disk->blkiothrottle.iops_wr) {
  +            virBufferAsprintf(opt, ",iops_wr=%llu",
  +                              disk->blkiothrottle.iops_wr);
  +        }
  +    }
  +
       if (virBufferError(opt)) {
           virReportOOMError();
           goto error;
  diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
  index 5588d93..bbee9a3 100644
  --- a/src/qemu/qemu_driver.c
  +++ b/src/qemu/qemu_driver.c
  @@ -10449,6 +10449,59 @@ qemuDomainBlockPull(virDomainPtr dom, const char *path, unsigned long bandwidth,
       return ret;
   }
 
  +static int
  +qemuDomainBlockIoThrottle(virDomainPtr dom,
  +                          const char *disk,
  +                          virDomainBlockIoThrottleInfoPtr info,
  +                          virDomainBlockIoThrottleInfoPtr reply,
  +                          unsigned int flags)
  +{
  +    struct qemud_driver *driver = dom->conn->privateData;
  +    virDomainObjPtr vm = NULL;
  +    qemuDomainObjPrivatePtr priv;
  +    char uuidstr[VIR_UUID_STRING_BUFLEN];
  +    const char *device = NULL;
  +    int ret = -1;
  +
  +    qemuDriverLock(driver);
  +    virUUIDFormat(dom->uuid, uuidstr);
  +    vm = virDomainFindByUUID(&driver->domains, dom->uuid);
  +    if (!vm) {
  +        qemuReportError(VIR_ERR_NO_DOMAIN,
  +                        _("no domain with matching uuid '%s'"), uuidstr);
  +        goto cleanup;
  +    }
  +
  +    if (!virDomainObjIsActive(vm)) {
  +        qemuReportError(VIR_ERR_OPERATION_INVALID,
  +                        "%s", _("domain is not running"));
  +        goto cleanup;
  +    }
  +
  +    device = qemuDiskPathToAlias(vm, disk);
  +    if (!device) {
  +        goto cleanup;
  +    }
  +
  +    if (qemuDomainObjBeginJobWithDriver(driver, vm, QEMU_JOB_MODIFY) < 0)
  +        goto cleanup

Re: [libvirt] [RFC PATCH 3/5] Implement virDomainBlockIoThrottle for the qemu driver

2011-10-12 Thread Zhi Yong Wu
On Tue, Oct 11, 2011 at 11:19 PM, Adam Litke a...@us.ibm.com wrote:
 On Mon, Oct 10, 2011 at 09:45:11PM +0800, Lei HH Li wrote:

 Summary here.


 Signed-off-by: Zhi Yong Wu wu...@linux.vnet.ibm.com
 ---
  src/qemu/qemu_command.c      |   35 ++
  src/qemu/qemu_driver.c       |   54 +
  src/qemu/qemu_migration.c    |   16 +++---
  src/qemu/qemu_monitor.c      |   19 +++
  src/qemu/qemu_monitor.h      |    6 ++
  src/qemu/qemu_monitor_json.c |  107 
 ++
  src/qemu/qemu_monitor_json.h |    6 ++
  src/qemu/qemu_monitor_text.c |   61 
  src/qemu/qemu_monitor_text.h |    6 ++
  9 files changed, 302 insertions(+), 8 deletions(-)

 diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
 index cf99f89..c4d2938 100644
 --- a/src/qemu/qemu_command.c
 +++ b/src/qemu/qemu_command.c
 @@ -1728,6 +1728,41 @@ qemuBuildDriveStr(virDomainDiskDefPtr disk,
          }
      }

 +    /*block I/O throttling*/
 +    if (disk->blkiothrottle.bps || disk->blkiothrottle.bps_rd
 +        || disk->blkiothrottle.bps_wr || disk->blkiothrottle.iops
 +        || disk->blkiothrottle.iops_rd || disk->blkiothrottle.iops_wr) {

 The above suggests that you should dynamically allocate the blkiothrottle
 struct.  Then you could reduce this check to:
If the structure is dynamically allocated, it will be easy to leak
memory, even though the check is simplified.

       if (disk->blkiothrottle) {

 +        if (disk->blkiothrottle.bps) {
 +            virBufferAsprintf(opt, ",bps=%llu",
 +                              disk->blkiothrottle.bps);
 +        }
 +
 +        if (disk->blkiothrottle.bps_rd) {
 +            virBufferAsprintf(opt, ",bps_rd=%llu",
 +                              disk->blkiothrottle.bps_rd);
 +        }
 +
 +        if (disk->blkiothrottle.bps_wr) {
 +            virBufferAsprintf(opt, ",bps_wr=%llu",
 +                              disk->blkiothrottle.bps_wr);
 +        }
 +
 +        if (disk->blkiothrottle.iops) {
 +            virBufferAsprintf(opt, ",iops=%llu",
 +                              disk->blkiothrottle.iops);
 +        }
 +
 +        if (disk->blkiothrottle.iops_rd) {
 +            virBufferAsprintf(opt, ",iops_rd=%llu",
 +                              disk->blkiothrottle.iops_rd);
 +        }
 +
 +        if (disk->blkiothrottle.iops_wr) {
 +            virBufferAsprintf(opt, ",iops_wr=%llu",
 +                              disk->blkiothrottle.iops_wr);
 +        }
 +    }
 +
      if (virBufferError(opt)) {
          virReportOOMError();
          goto error;
 diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
 index 5588d93..bbee9a3 100644
 --- a/src/qemu/qemu_driver.c
 +++ b/src/qemu/qemu_driver.c
 @@ -10449,6 +10449,59 @@ qemuDomainBlockPull(virDomainPtr dom, const char *path, unsigned long bandwidth,
      return ret;
  }

 +static int
 +qemuDomainBlockIoThrottle(virDomainPtr dom,
 +                          const char *disk,
 +                          virDomainBlockIoThrottleInfoPtr info,
 +                          virDomainBlockIoThrottleInfoPtr reply,
 +                          unsigned int flags)
 +{
 +    struct qemud_driver *driver = dom->conn->privateData;
 +    virDomainObjPtr vm = NULL;
 +    qemuDomainObjPrivatePtr priv;
 +    char uuidstr[VIR_UUID_STRING_BUFLEN];
 +    const char *device = NULL;
 +    int ret = -1;
 +
 +    qemuDriverLock(driver);
 +    virUUIDFormat(dom->uuid, uuidstr);
 +    vm = virDomainFindByUUID(&driver->domains, dom->uuid);
 +    if (!vm) {
 +        qemuReportError(VIR_ERR_NO_DOMAIN,
 +                        _("no domain with matching uuid '%s'"), uuidstr);
 +        goto cleanup;
 +    }
 +
 +    if (!virDomainObjIsActive(vm)) {
 +        qemuReportError(VIR_ERR_OPERATION_INVALID,
 +                        "%s", _("domain is not running"));
 +        goto cleanup;
 +    }
 +
 +    device = qemuDiskPathToAlias(vm, disk);
 +    if (!device) {
 +        goto cleanup;
 +    }
 +
 +    if (qemuDomainObjBeginJobWithDriver(driver, vm, QEMU_JOB_MODIFY) < 0)
 +        goto cleanup;
 +    qemuDomainObjEnterMonitorWithDriver(driver, vm);
 +    priv = vm->privateData;
 +    ret = qemuMonitorBlockIoThrottle(priv->mon, device, info, reply, flags);
 +    qemuDomainObjExitMonitorWithDriver(driver, vm);
 +    if (qemuDomainObjEndJob(driver, vm) == 0) {
 +        vm = NULL;
 +        goto cleanup;
 +    }
 +
 +cleanup:
 +    VIR_FREE(device);
 +    if (vm)
 +        virDomainObjUnlock(vm);
 +    qemuDriverUnlock(driver);
 +    return ret;
 +}
 +
  static virDriver qemuDriver = {
      .no = VIR_DRV_QEMU,
      .name = "QEMU",
 @@ -10589,6 +10642,7 @@ static virDriver qemuDriver = {
      .domainGetBlockJobInfo = qemuDomainGetBlockJobInfo, /* 0.9.4 */
      .domainBlockJobSetSpeed = qemuDomainBlockJobSetSpeed, /* 0.9.4 */
      .domainBlockPull = qemuDomainBlockPull, /* 0.9.4 */
 +    .domainBlockIoThrottle = qemuDomainBlockIoThrottle, /* 0.9.4 */
  };


 diff --git a/src/qemu/qemu_migration.c b/src/qemu

Re: [libvirt] [RFC PATCH 1/5] Add new API virDomainBlockIoThrottle

2011-10-12 Thread Zhi Yong Wu
On Tue, Oct 11, 2011 at 10:59 PM, Adam Litke a...@us.ibm.com wrote:
 On Mon, Oct 10, 2011 at 09:45:09PM +0800, Lei HH Li wrote:

 Hi Lei.  You are missing a patch summary at the top of this email.  In your
 summary you want to let reviewers know what the patch is doing.  For example,
 this patch defines the new virDomainBlockIoThrottle API and specifies the XML
 schema.  Also at the top of the patch you have an opportunity to explain
 why you made a particular design decision.  For example, you could explain
 why you chose
I think so. :)  We should explain why we are creating a new libvirt
command instead of extending blkiotune.
BTW: can we CC these patches to the related developers to get their
comments? (e.g., Daniel, Gui Jianfeng, etc.)

 to represent the throttling inside the disk tag rather than alongside the
 blkiotune settings.


 Signed-off-by: Zhi Yong Wu wu...@linux.vnet.ibm.com
 ---
  include/libvirt/libvirt.h.in |   22 
  src/conf/domain_conf.c       |   77 
 ++
  src/conf/domain_conf.h       |   11 ++
  src/driver.h                 |   11 ++
  src/libvirt.c                |   66 
  src/libvirt_public.syms      |    1 +
  src/util/xml.h               |    3 ++
  7 files changed, 191 insertions(+), 0 deletions(-)

 diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
 index 07617be..f7b892d 100644
 --- a/include/libvirt/libvirt.h.in
 +++ b/include/libvirt/libvirt.h.in
 @@ -1573,6 +1573,28 @@ int    virDomainBlockJobSetSpeed(virDomainPtr dom, const char *path,
  int           virDomainBlockPull(virDomainPtr dom, const char *path,
                                   unsigned long bandwidth, unsigned int flags);

 +/*
 + * Block I/O throttling support
 + */
 +
 +typedef unsigned long long virDomainBlockIoThrottleUnits;
 +
 +typedef struct _virDomainBlockIoThrottleInfo virDomainBlockIoThrottleInfo;
 +struct _virDomainBlockIoThrottleInfo {
 +    virDomainBlockIoThrottleUnits bps;
 +    virDomainBlockIoThrottleUnits bps_rd;
 +    virDomainBlockIoThrottleUnits bps_wr;
 +    virDomainBlockIoThrottleUnits iops;
 +    virDomainBlockIoThrottleUnits iops_rd;
 +    virDomainBlockIoThrottleUnits iops_wr;
 +};
 +typedef virDomainBlockIoThrottleInfo *virDomainBlockIoThrottleInfoPtr;

 I don't think it is necessary to use a typedef for the unsigned long long 
 values
 in the virDomainBlockIoThrottleInfo structure.  Just use unsigned long long
 directly.

 You might also want to consider using virTypedParameter's for this structure.
 It would allow us to add additional fields in the future.
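 As a rough sketch of what that could look like (illustrative; this mirrors
 the shape of the virDomainSetBlockIoTune API that was eventually merged,
 rather than the signature proposed in this patch):

     #include <string.h>
     #include <libvirt/libvirt.h>

     static int set_total_bps(virDomainPtr dom, const char *disk,
                              unsigned long long bps)
     {
         virTypedParameter param;

         memset(&param, 0, sizeof(param));
         /* one (field, type, value) triple per tunable; more fields can be
          * added later without breaking the ABI */
         strncpy(param.field, "total_bytes_sec",
                 VIR_TYPED_PARAM_FIELD_LENGTH - 1);
         param.type = VIR_TYPED_PARAM_ULLONG;
         param.value.ul = bps;

         return virDomainSetBlockIoTune(dom, disk, &param, 1, 0);
     }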

 +
 +int    virDomainBlockIoThrottle(virDomainPtr dom,

 The libvirt project style is to place the function return value on its own 
 line:

 int
 virDomainBlockIoThrottle(virDomainPtr dom,
 ...

 +                                const char *disk,
 +                                virDomainBlockIoThrottleInfoPtr info,
 +                                virDomainBlockIoThrottleInfoPtr reply,
 +                                unsigned int flags);

  /*
   * NUMA support
 diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
 index 944cfa9..d0ba07e 100644
 --- a/src/conf/domain_conf.c
 +++ b/src/conf/domain_conf.c
 @@ -2422,6 +2422,42 @@ virDomainDiskDefParseXML(virCapsPtr caps,
                  iotag = virXMLPropString(cur, "io");
                  ioeventfd = virXMLPropString(cur, "ioeventfd");
                  event_idx = virXMLPropString(cur, "event_idx");
 +            } else if (xmlStrEqual(cur->name, BAD_CAST "iothrottle")) {
 +                char *io_throttle = NULL;
 +                io_throttle = virXMLPropString(cur, "bps");
 +                if (io_throttle) {
 +                    def->blkiothrottle.bps = strtoull(io_throttle, NULL, 10);
 +                    VIR_FREE(io_throttle);
 +                }
 +
 +                io_throttle = virXMLPropString(cur, "bps_rd");
 +                if (io_throttle) {
 +                    def->blkiothrottle.bps_rd = strtoull(io_throttle, NULL, 10);
 +                    VIR_FREE(io_throttle);
 +                }
 +
 +                io_throttle = virXMLPropString(cur, "bps_wr");
 +                if (io_throttle) {
 +                    def->blkiothrottle.bps_wr = strtoull(io_throttle, NULL, 10);
 +                    VIR_FREE(io_throttle);
 +                }
 +
 +                io_throttle = virXMLPropString(cur, "iops");
 +                if (io_throttle) {
 +                    def->blkiothrottle.iops = strtoull(io_throttle, NULL, 10);
 +                    VIR_FREE(io_throttle);
 +                }
 +
 +                io_throttle = virXMLPropString(cur, "iops_rd");
 +                if (io_throttle) {
 +                    def->blkiothrottle.iops_rd = strtoull(io_throttle, NULL, 10);
 +                    VIR_FREE(io_throttle);
 +                }
 +
 +                io_throttle = virXMLPropString(cur, "iops_wr");
 +                if (io_throttle

[libvirt] [PATCH v1] domain_conf: add the support for disk I/O throttle setting

2011-09-07 Thread Zhi Yong Wu
The first patch is only meant to explore whether it is suitable to extend
blkiotune to implement disk I/O throttling.

As you know, when blkiotune is issued without options, it displays the
current tuning parameters.  If we extend it, what should it display when
invoked without options?  Both sets of info, or should a new option be
added to display them separately?

Signed-off-by: Zhi Yong Wu wu...@linux.vnet.ibm.com
---
 src/conf/domain_conf.c |   18 ++
 src/conf/domain_conf.h |   11 +++
 2 files changed, 29 insertions(+), 0 deletions(-)

diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index cce9955..7dd350a 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -9065,6 +9065,24 @@ virDomainDiskDefFormat(virBufferPtr buf,
     virBufferAsprintf(buf, "      <target dev='%s' bus='%s'/>\n",
                       def->dst, bus);
 
+    /* disk I/O throttling */
+    if (def->blkio.blkiothrottle) {
+        virBufferAsprintf(buf, "      <blkiothrottle>\n");
+        virBufferAsprintf(buf, "        <bps>%llu</bps>\n",
+                          def->blkiothrottle.bps);
+        virBufferAsprintf(buf, "        <bps_rd>%llu</bps_rd>\n",
+                          def->blkiothrottle.bps_rd);
+        virBufferAsprintf(buf, "        <bps_wr>%llu</bps_wr>\n",
+                          def->blkiothrottle.bps_wr);
+        virBufferAsprintf(buf, "        <iops>%llu</iops>\n",
+                          def->blkiothrottle.iops);
+        virBufferAsprintf(buf, "        <iops_rd>%llu</iops_rd>\n",
+                          def->blkiothrottle.iops_rd);
+        virBufferAsprintf(buf, "        <iops_wr>%llu</iops_wr>\n",
+                          def->blkiothrottle.iops_wr);
+        virBufferAsprintf(buf, "      </blkiothrottle>\n");
+    }
+
     if (def->bootIndex)
         virBufferAsprintf(buf, "      <boot order='%d'/>\n", def->bootIndex);
     if (def->readonly)
diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h
index e218a30..5902377 100644
--- a/src/conf/domain_conf.h
+++ b/src/conf/domain_conf.h
@@ -258,6 +258,17 @@ struct _virDomainDiskDef {
     virDomainDiskHostDefPtr hosts;
     char *driverName;
     char *driverType;
+
+    /* disk I/O throttling */
+    struct {
+        unsigned long long bps;
+        unsigned long long bps_rd;
+        unsigned long long bps_wr;
+        unsigned long long iops;
+        unsigned long long iops_rd;
+        unsigned long long iops_wr;
+    } blkiothrottle;
+
     char *serial;
     int cachemode;
     int error_policy;
-- 
1.7.6



[libvirt] sorry, pls ignore, it is not correct.Re: [PATCH v1] domain_conf: add the support for disk I/O throttle setting

2011-09-07 Thread Zhi Yong Wu
On Wed, Sep 07, 2011 at 05:00:35PM +0800, Zhi Yong Wu wrote:
From: Zhi Yong Wu wu...@linux.vnet.ibm.com
To: libvir-list@redhat.com
Cc: stefa...@linux.vnet.ibm.com, a...@us.ibm.com, zwu.ker...@gmail.com, Zhi
 Yong Wu wu...@linux.vnet.ibm.com
Subject: [PATCH v1] domain_conf: add the support for disk I/O throttle
 setting
Date: Wed,  7 Sep 2011 17:00:35 +0800
Message-Id: 1315386035-23319-1-git-send-email-wu...@linux.vnet.ibm.com

The first patch is only meant to explore whether it is suitable to extend
blkiotune to implement disk I/O throttling.

As you know, when blkiotune is issued without options, it displays the
current tuning parameters.  If we extend it, what should it display when
invoked without options?  Both sets of info, or should a new option be
added to display them separately?

Signed-off-by: Zhi Yong Wu wu...@linux.vnet.ibm.com
---
 src/conf/domain_conf.c |   18 ++
 src/conf/domain_conf.h |   11 +++
 2 files changed, 29 insertions(+), 0 deletions(-)

diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index cce9955..7dd350a 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -9065,6 +9065,24 @@ virDomainDiskDefFormat(virBufferPtr buf,
     virBufferAsprintf(buf, "      <target dev='%s' bus='%s'/>\n",
                       def->dst, bus);
 
+    /* disk I/O throttling */
+    if (def->blkio.blkiothrottle) {
+        virBufferAsprintf(buf, "      <blkiothrottle>\n");
+        virBufferAsprintf(buf, "        <bps>%llu</bps>\n",
+                          def->blkiothrottle.bps);
+        virBufferAsprintf(buf, "        <bps_rd>%llu</bps_rd>\n",
+                          def->blkiothrottle.bps_rd);
+        virBufferAsprintf(buf, "        <bps_wr>%llu</bps_wr>\n",
+                          def->blkiothrottle.bps_wr);
+        virBufferAsprintf(buf, "        <iops>%llu</iops>\n",
+                          def->blkiothrottle.iops);
+        virBufferAsprintf(buf, "        <iops_rd>%llu</iops_rd>\n",
+                          def->blkiothrottle.iops_rd);
+        virBufferAsprintf(buf, "        <iops_wr>%llu</iops_wr>\n",
+                          def->blkiothrottle.iops_wr);
+        virBufferAsprintf(buf, "      </blkiothrottle>\n");
+    }
+
     if (def->bootIndex)
         virBufferAsprintf(buf, "      <boot order='%d'/>\n", def->bootIndex);
     if (def->readonly)
diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h
index e218a30..5902377 100644
--- a/src/conf/domain_conf.h
+++ b/src/conf/domain_conf.h
@@ -258,6 +258,17 @@ struct _virDomainDiskDef {
     virDomainDiskHostDefPtr hosts;
     char *driverName;
     char *driverType;
+
+    /* disk I/O throttling */
+    struct {
+        unsigned long long bps;
+        unsigned long long bps_rd;
+        unsigned long long bps_wr;
+        unsigned long long iops;
+        unsigned long long iops_rd;
+        unsigned long long iops_wr;
+    } blkiothrottle;
+
     char *serial;
     int cachemode;
     int error_policy;
-- 
1.7.6




[libvirt] [PATCH v1] domain_conf: add the support for disk I/O throttle setting

2011-09-07 Thread Zhi Yong Wu
The first patch is only meant to explore whether it is suitable to extend
blkiotune to implement disk I/O throttling.
As you know, when blkiotune is issued without options, it displays the
current tuning parameters.  If we extend it, what should it display when
invoked without options?  Both sets of info, or should a new option be
added to display them separately?

Signed-off-by: Zhi Yong Wu wu...@linux.vnet.ibm.com
---
 src/conf/domain_conf.c |   70 ++-
 src/conf/domain_conf.h |   11 +++
 2 files changed, 79 insertions(+), 2 deletions(-)

diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index cce9955..d9108fa 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -2225,6 +2225,7 @@ cleanup:
 static virDomainDiskDefPtr
 virDomainDiskDefParseXML(virCapsPtr caps,
                          xmlNodePtr node,
+                         xmlXPathContextPtr ctxt,
                          virBitmapPtr bootMap,
                          unsigned int flags)
 {
@@ -2266,7 +2267,9 @@ virDomainDiskDefParseXML(virCapsPtr caps,
     }
 
     cur = node->children;
+    xmlNodePtr oldnode = ctxt->node;
     while (cur != NULL) {
+        ctxt->node = cur;
         if (cur->type == XML_ELEMENT_NODE) {
             if ((source == NULL && hosts == NULL) &&
                 (xmlStrEqual(cur->name, BAD_CAST "source"))) {
@@ -2362,6 +2365,36 @@ virDomainDiskDefParseXML(virCapsPtr caps,
                 iotag = virXMLPropString(cur, "io");
                 ioeventfd = virXMLPropString(cur, "ioeventfd");
                 event_idx = virXMLPropString(cur, "event_idx");
+            } else if (xmlStrEqual(cur->name, BAD_CAST "blkiothrottle")) {
+                if (virXPathULongLong("string(./blkiothrottle/bps)", ctxt,
+                                      &def->blkiothrottle.bps) < 0) {
+                    def->blkiothrottle.bps = 0;
+                }
+
+                if (virXPathULongLong("string(./blkiothrottle/bps_rd)", ctxt,
+                                      &def->blkiothrottle.bps_rd) < 0) {
+                    def->blkiothrottle.bps_rd = 0;
+                }
+
+                if (virXPathULongLong("string(./blkiothrottle/bps_wr)", ctxt,
+                                      &def->blkiothrottle.bps_wr) < 0) {
+                    def->blkiothrottle.bps_wr = 0;
+                }
+
+                if (virXPathULongLong("string(./blkiothrottle/iops)", ctxt,
+                                      &def->blkiothrottle.iops) < 0) {
+                    def->blkiothrottle.iops = 0;
+                }
+
+                if (virXPathULongLong("string(./blkiothrottle/iops_rd)", ctxt,
+                                      &def->blkiothrottle.iops_rd) < 0) {
+                    def->blkiothrottle.iops_rd = 0;
+                }
+
+                if (virXPathULongLong("string(./blkiothrottle/iops_wr)", ctxt,
+                                      &def->blkiothrottle.iops_wr) < 0) {
+                    def->blkiothrottle.iops_wr = 0;
+                }
             } else if (xmlStrEqual(cur->name, BAD_CAST "readonly")) {
                 def->readonly = 1;
             } else if (xmlStrEqual(cur->name, BAD_CAST "shareable")) {
@@ -2387,6 +2420,7 @@ virDomainDiskDefParseXML(virCapsPtr caps,
         }
         cur = cur->next;
     }
+    ctxt->node = oldnode;
 
     device = virXMLPropString(node, "device");
     if (device) {
@@ -5684,9 +5718,13 @@ virDomainDeviceDefPtr virDomainDeviceDefParse(virCapsPtr caps,
 
     if (xmlStrEqual(node->name, BAD_CAST "disk")) {
         dev->type = VIR_DOMAIN_DEVICE_DISK;
-        if (!(dev->data.disk = virDomainDiskDefParseXML(caps, node,
-                                                        NULL, flags)))
+        xmlNodePtr oldnode = ctxt->node;
+        if (!(dev->data.disk = virDomainDiskDefParseXML(caps, node, ctxt,
+                                                        NULL, flags))) {
+            ctxt->node = oldnode;
             goto error;
+        }
+        ctxt->node = oldnode;
     } else if (xmlStrEqual(node->name, BAD_CAST "lease")) {
         dev->type = VIR_DOMAIN_DEVICE_LEASE;
         if (!(dev->data.lease = virDomainLeaseDefParseXML(node)))
@@ -6725,11 +6763,16 @@ static virDomainDefPtr virDomainDefParseXML(virCapsPtr caps,
     }
     if (n && VIR_ALLOC_N(def->disks, n) < 0)
         goto no_memory;
+
+    xmlNodePtr oldnode = ctxt->node;
     for (i = 0 ; i < n ; i++) {
+        ctxt->node = nodes[i];
         virDomainDiskDefPtr disk = virDomainDiskDefParseXML(caps,
                                                             nodes[i],
+                                                            ctxt,
                                                             bootMap,
                                                             flags);
+        ctxt->node = oldnode;
         if (!disk)
             goto error;
 
@@ -9065,6 +9108,29 @@ virDomainDiskDefFormat(virBufferPtr buf,
     virBufferAsprintf(buf, "      <target dev='%s' bus='%s'/>\n",
                       def->dst, bus);
 
+    /* disk I/O throttling */
+    if (def->blkio.blkiothrottle.bps
+        || def->blkio.blkiothrottle.bps_rd

Re: [libvirt] [RFC] block I/O throttling: how to enable in libvirt

2011-09-04 Thread Zhi Yong Wu
On Fri, Sep 02, 2011 at 08:34:19AM -0500, Adam Litke wrote:

On Fri, Sep 02, 2011 at 10:14:48AM +0800, Zhi Yong Wu wrote:
 On Thu, Sep 01, 2011 at 01:05:31PM +0800, Zhi Yong Wu wrote:
 Hi, Adam,
 Now Stefan, Daniel, and Gui all suggest extending blkiotune to keep the
 libvirt interface unified.  What do you think of that?

It seems like it would be nice to extend the blkiotune API for this use case,
but there is one problem.  The blkiotune interface operates at the VM level and
the QEMU throttling is at the disk level.  So if you want per-disk throttling
granularity the blkiotune API may not be suitable.  I see 3 possible paths
forward:

1) Accept the blkiotune limitation of a global setting and add a throttle
option that assigns the same throttle value to all disks.

  <blkiotune>
    <shares>250</shares>
    <throttle>1024</throttle>
  </blkiotune>

  + Nice for simplicity and ease of implementation
  - May be too simplistic (can't set different throttling for different disks)


2) Extend the blkiotune API to allow per-disk tunings:

  <blkiotune>
    <shares>250</shares>
    <throttle device='virtio-disk0'>1024</throttle>
  </blkiotune>

  + Use an existing API
  - Unfortunately it doesn't look like this can be done cleanly


3) Use a new API virDomainSetBlkDevParameters() to set device specific
tuning parameters:

<blkiotune>
  <shares>250</shares>
</blkiotune>
...
<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='...'/>
  <target dev='vda' bus='virtio'/>
  <blkdevtune>
    <throttle>1024</throttle>
Several I/O throttling tags should be supported, such as thrbps, thrbpsrd,
thrbpswr, thriops, etc.
  </blkdevtune>
</disk>

  + Throttle can be specified with the disk it is affecting
  - Proliferation of tuning APIs


Obviously #1 is the best if you can accept the limitations of the API.
#1 is not accepted by us.  If it is adopted, the advantage of QEMU I/O
throttling is not fully used.
#2 would be nice if we could figure out how to extend virTypedParameter to
represent a (disk, param) pair.
I guess that #2 is less extensible than #3.
I see #3 as a last resort.  Did I miss any other solutions?
To be honest, I prefer #3.  First, I plan to check how blkiotune works
currently.
Did I just manage to completely over-complicate this? :)
No, they are nice.  Thanks.


Regards,

Zhiyong Wu

-- 
Adam Litke a...@us.ibm.com
IBM Linux Technology Center



Re: [libvirt] [Qemu-devel] [RFC] block I/O throttling: how to enable in libvirt

2011-09-02 Thread Zhi Yong Wu
On Fri, Sep 02, 2011 at 09:50:42AM +0100, Stefan Hajnoczi wrote:

On Fri, Sep 2, 2011 at 3:09 AM, Zhi Yong Wu wu...@linux.vnet.ibm.com wrote:
 On Thu, Sep 01, 2011 at 09:11:49AM +0100, Stefan Hajnoczi wrote:

On Thu, Sep 01, 2011 at 01:05:31PM +0800, Zhi Yong Wu wrote:
 On Wed, Aug 31, 2011 at 08:18:19AM +0100, Stefan Hajnoczi wrote:
 On Tue, Aug 30, 2011 at 2:46 PM, Adam Litke a...@us.ibm.com wrote:
  On Tue, Aug 30, 2011 at 09:53:33AM +0100, Stefan Hajnoczi wrote:
  I/O throttling can be applied independently to each -drive attached to
  a guest and supports throughput/iops limits.  For more information on
  this QEMU feature and a comparison with blkio-controller, see Ryan
  Harper's KVM Forum 2011 presentation:
 
  http://www.linux-kvm.org/wiki/images/7/72/2011-forum-keep-a-limit-on-it-io-throttling-in-qemu.pdf
 
  From the presentation, it seems that both the cgroups method and the
  qemu method offer comparable control (assuming a block device), so it
  might be possible to apply either method from the same API in a
  transparent manner.  Am I correct, or are we suggesting that the Qemu
  throttling approach should always be used for Qemu domains?
 
 QEMU I/O throttling does not provide a proportional share mechanism.
 So you cannot assign weights to VMs and let them receive a fraction of
 the available disk time.  That is only supported by cgroups
 blkio-controller because it requires a global view which QEMU does not
 have.
 
 So I think the two are complementary:
 
 If proportional share should be used on a host block device, use
 cgroups blkio-controller.
 Otherwise use QEMU I/O throttling.
 Stefan,

 Do you agree with introducing a new libvirt command blkiothrottle now?
 If so, I will work on a code draft to make it work.

No, I think that the blkiotune command should be extended to support
QEMU I/O throttling.  This is not new functionality, we already have
cgroups blkio-controller support today.  Therefore I think it makes
sense to keep a unified interface instead of adding a new command.
 QEMU I/O throttling currently doesn't support those blkiotune options,
 such as --live, --config and --current.  If the bps/iops settings are
 modified, they take effect immediately.

The --live, --config, and --current options are implemented inside
libvirt and do not require hypervisor support.  Take a look at
src/qemu/qemu_driver.c:qemuDomainSetBlkioParameters() to see how these
options are implemented for blkiotune today.
I have understood this, thanks.
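For my own notes, the pattern is roughly this (simplified types, not the
actual qemuDomainSetBlkioParameters code):

    #include <stdbool.h>

    enum {
        AFFECT_CURRENT = 0,
        AFFECT_LIVE    = 1 << 0,   /* change the running domain */
        AFFECT_CONFIG  = 1 << 1,   /* change the persistent definition */
    };

    struct limits { unsigned long long bps; };

    static int set_limits(bool domain_is_active,
                          struct limits *live_state,
                          struct limits *persistent_config,
                          unsigned long long bps,
                          unsigned int flags)
    {
        if (flags == AFFECT_CURRENT)   /* --current: pick based on state */
            flags = domain_is_active ? AFFECT_LIVE : AFFECT_CONFIG;

        if (flags & AFFECT_LIVE) {
            if (!domain_is_active)
                return -1;             /* cannot touch a stopped domain live */
            live_state->bps = bps;     /* real code issues the monitor command */
        }
        if (flags & AFFECT_CONFIG)
            persistent_config->bps = bps;  /* real code updates the saved XML */
        return 0;
    }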

BTW: I have seen your comments against block I/O

Re: [libvirt] [Qemu-devel] [RFC] block I/O throttling: how to enable in libvirt

2011-09-01 Thread Zhi Yong Wu
On Fri, Sep 02, 2011 at 09:16:59AM +0800, Gui Jianfeng wrote:

On 2011-9-1 16:11, Stefan Hajnoczi wrote:
 On Thu, Sep 01, 2011 at 01:05:31PM +0800, Zhi Yong Wu wrote:
 On Wed, Aug 31, 2011 at 08:18:19AM +0100, Stefan Hajnoczi wrote:
 On Tue, Aug 30, 2011 at 2:46 PM, Adam Litke a...@us.ibm.com wrote:
 On Tue, Aug 30, 2011 at 09:53:33AM +0100, Stefan Hajnoczi wrote:
 I/O throttling can be applied independently to each -drive attached to
 a guest and supports throughput/iops limits.  For more information on
 this QEMU feature and a comparison with blkio-controller, see Ryan
 Harper's KVM Forum 2011 presentation:

 http://www.linux-kvm.org/wiki/images/7/72/2011-forum-keep-a-limit-on-it-io-throttling-in-qemu.pdf

 From the presentation, it seems that both the cgroups method and the qemu
 method offer comparable control (assuming a block device), so it might be
 possible to apply either method from the same API in a transparent manner.
 Am I correct, or are we suggesting that the Qemu throttling approach should
 always be used for Qemu domains?

 QEMU I/O throttling does not provide a proportional share mechanism.
 So you cannot assign weights to VMs and let them receive a fraction of
 the available disk time.  That is only supported by cgroups
 blkio-controller because it requires a global view which QEMU does not
 have.

 So I think the two are complementary:

 If proportional share should be used on a host block device, use
 cgroups blkio-controller.
 Otherwise use QEMU I/O throttling.
 Stefan,

 Do you agree with introducing one new libvirt command blkiothrottle now?
 If so, I will work on the code draft to make it work.
 
 No, I think that the blkiotune command should be extended to support
 QEMU I/O throttling.  This is not new functionality, we already have
 cgroups blkio-controller support today.  Therefore I think it makes
 sense to keep a unified interface instead of adding a new command.

Agreed.
Proportional controlling interfaces and throttling interfaces are all in
the same cgroup subsystem. So just extend blkiotune to add new options
to support throttling tuning.
Hi, Gui,
QEMU block I/O throttling is not related to the cgroup subsystem, I think.
Anyway, thanks for your suggestions.


Regards,

Zhi Yong Wu


Thanks,
Gui

 
 Stefan
 
 




--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [RFC] block I/O throttling: how to enable in libvirt

2011-09-01 Thread Zhi Yong Wu
On Thu, Sep 01, 2011 at 09:49:34AM +0100, Daniel P. Berrange wrote:

On Thu, Sep 01, 2011 at 09:11:49AM +0100, Stefan Hajnoczi wrote:
 On Thu, Sep 01, 2011 at 01:05:31PM +0800, Zhi Yong Wu wrote:
  On Wed, Aug 31, 2011 at 08:18:19AM +0100, Stefan Hajnoczi wrote:
  On Tue, Aug 30, 2011 at 2:46 PM, Adam Litke a...@us.ibm.com wrote:
   On Tue, Aug 30, 2011 at 09:53:33AM +0100, Stefan Hajnoczi wrote:
   I/O throttling can be applied independently to each -drive attached to
   a guest and supports throughput/iops limits.  For more information on
   this QEMU feature and a comparison with blkio-controller, see Ryan
   Harper's KVM Forum 2011 presentation:
  
   http://www.linux-kvm.org/wiki/images/7/72/2011-forum-keep-a-limit-on-it-io-throttling-in-qemu.pdf
  
   From the presentation, it seems that both the cgroups method and the qemu
   method offer comparable control (assuming a block device), so it might be
   possible to apply either method from the same API in a transparent manner.
   Am I correct, or are we suggesting that the Qemu throttling approach should
   always be used for Qemu domains?
  
  QEMU I/O throttling does not provide a proportional share mechanism.
  So you cannot assign weights to VMs and let them receive a fraction of
  the available disk time.  That is only supported by cgroups
  blkio-controller because it requires a global view which QEMU does not
  have.
  
  So I think the two are complementary:
  
  If proportional share should be used on a host block device, use
  cgroups blkio-controller.
  Otherwise use QEMU I/O throttling.
  Stefan,
  
  Do you agree with introducing one new libvirt command blkiothrottle now?
  If so, I will work on the code draft to make it work.
 
 No, I think that the blkiotune command should be extended to support
 QEMU I/O throttling.  This is not new functionality, we already have
 cgroups blkio-controller support today.  Therefore I think it makes
 sense to keep a unified interface instead of adding a new command.

Agreed, the virDomainGetBlkioParameters/virDomainSetBlkioParameters
APIs, and blkio virsh command are intended to be a generic interface
for setting any block related tuning parameters, regardless of what
the underling implementation is. So any use of QEMU I/O throttling
features should be added to those APIs/commands.
Thanks for your suggestions.

Regards,

Zhi Yong Wu


Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list

Re: [libvirt] [Qemu-devel] [RFC] block I/O throttling: how to enable in libvirt

2011-09-01 Thread Zhi Yong Wu
On Thu, Sep 01, 2011 at 09:11:49AM +0100, Stefan Hajnoczi wrote:

On Thu, Sep 01, 2011 at 01:05:31PM +0800, Zhi Yong Wu wrote:
 On Wed, Aug 31, 2011 at 08:18:19AM +0100, Stefan Hajnoczi wrote:
 On Tue, Aug 30, 2011 at 2:46 PM, Adam Litke a...@us.ibm.com wrote:
  On Tue, Aug 30, 2011 at 09:53:33AM +0100, Stefan Hajnoczi wrote:
  I/O throttling can be applied independently to each -drive attached to
  a guest and supports throughput/iops limits.  For more information on
  this QEMU feature and a comparison with blkio-controller, see Ryan
  Harper's KVM Forum 2011 presentation:
 
  http://www.linux-kvm.org/wiki/images/7/72/2011-forum-keep-a-limit-on-it-io-throttling-in-qemu.pdf
 
  From the presentation, it seems that both the cgroups method and the qemu
  method offer comparable control (assuming a block device), so it might be
  possible to apply either method from the same API in a transparent manner.
  Am I correct, or are we suggesting that the Qemu throttling approach should
  always be used for Qemu domains?
 
 QEMU I/O throttling does not provide a proportional share mechanism.
 So you cannot assign weights to VMs and let them receive a fraction of
 the available disk time.  That is only supported by cgroups
 blkio-controller because it requires a global view which QEMU does not
 have.
 
 So I think the two are complementary:
 
 If proportional share should be used on a host block device, use
 cgroups blkio-controller.
 Otherwise use QEMU I/O throttling.
 Stefan,
 
 Do you agree with introducing one new libvirt command blkiothrottle now?
 If so, I will work on the code draft to make it work.

No, I think that the blkiotune command should be extended to support
QEMU I/O throttling.  This is not new functionality, we already have
cgroups blkio-controller support today.  Therefore I think it makes
sense to keep a unified interface instead of adding a new command.
QEMU I/O throttling currently doesn't support those options of blkiotune, such as
--live, --config and --current. If those bps/iops settings are modified, they will
immediately take effect.

Regards,

Zhi Yong Wu

Stefan


--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list

Re: [libvirt] [RFC] block I/O throttling: how to enable in libvirt

2011-09-01 Thread Zhi Yong Wu
On Thu, Sep 01, 2011 at 01:05:31PM +0800, Zhi Yong Wu wrote:
- Forwarded message from Zhi Yong Wu wu...@linux.vnet.ibm.com -
Date: Thu, 1 Sep 2011 11:55:17 +0800
From: Zhi Yong Wu wu...@linux.vnet.ibm.com
To: Stefan Hajnoczi stefa...@gmail.com
Cc: Daniel P. Berrange berra...@redhat.com, Stefan Hajnoczi
   stefa...@gmail.com, Adam Litke a...@us.ibm.com, Zhi Yong Wu
   wu...@linux.vnet.ibm.com, QEMU Developers qemu-de...@nongnu.org,
   guijianf...@cn.fujitsu.com, hu...@cn.fujitsu.com
Subject: [RFC] block I/O throttling: how to enable in libvirt

On Wed, Aug 31, 2011 at 08:18:19AM +0100, Stefan Hajnoczi wrote:

On Tue, Aug 30, 2011 at 2:46 PM, Adam Litke a...@us.ibm.com wrote:
 On Tue, Aug 30, 2011 at 09:53:33AM +0100, Stefan Hajnoczi wrote:
 On Tue, Aug 30, 2011 at 3:55 AM, Zhi Yong Wu zwu.ker...@gmail.com wrote:
   I am trying to enable the block I/O throttling function in libvirt. But
   currently I have met some design questions, and am not sure whether we
   should extend blkiotune to support block I/O throttling or introduce
   one new libvirt command blkiothrottle to cover it. If you
   have a better idea, please don't hesitate to drop your comments.

 A little bit of context: this discussion is about adding libvirt
 support for QEMU disk I/O throttling.

 Thanks for the additional context Stefan.

 Today libvirt supports the cgroups blkio-controller, which handles
 proportional shares and throughput/iops limits on host block devices.
 blkio-controller does not support network file systems (NFS) or other
 QEMU remote block drivers (curl, Ceph/rbd, sheepdog) since they are
 not host block devices.  QEMU I/O throttling works with all types of
 -drive and therefore complements blkio-controller.

 The first question that pops into my mind is: Should a user need to 
 understand
 when to use the cgroups blkio-controller vs. the QEMU I/O throttling 
 method?  In
 my opinion, it would be nice if libvirt had a single interface for block I/O
 throttling and libvirt would decide which mechanism to use based on the 
 type of
 device and the specific limits that need to be set.

Yes, I agree it would be simplest to pick the right mechanism,
depending on the type of throttling the user wants.  More below.

 I/O throttling can be applied independently to each -drive attached to
 a guest and supports throughput/iops limits.  For more information on
 this QEMU feature and a comparison with blkio-controller, see Ryan
 Harper's KVM Forum 2011 presentation:

 http://www.linux-kvm.org/wiki/images/7/72/2011-forum-keep-a-limit-on-it-io-throttling-in-qemu.pdf

 From the presentation, it seems that both the cgroups method and the qemu
 method offer comparable control (assuming a block device), so it might be
 possible to apply either method from the same API in a transparent manner.
 Am I correct, or are we suggesting that the Qemu throttling approach should
 always be used for Qemu domains?

QEMU I/O throttling does not provide a proportional share mechanism.
So you cannot assign weights to VMs and let them receive a fraction of
the available disk time.  That is only supported by cgroups
blkio-controller because it requires a global view which QEMU does not
have.

So I think the two are complementary:

If proportional share should be used on a host block device, use
cgroups blkio-controller.
Otherwise use QEMU I/O throttling.
Stefan,

Do you agree with introducing one new libvirt command blkiothrottle now?
If so, I will work on the code draft to make it work.

Daniel and other maintainers,

If you are available, can you make some comments for us? :)
Hi, Adam,
Now Stefan, Daniel, and Gui all suggest extending blkiotune to keep the libvirt
interface unified. What do you think?

[libvirt] [RFC] block I/O throttling: how to enable in libvirt

2011-08-31 Thread Zhi Yong Wu
- Forwarded message from Zhi Yong Wu wu...@linux.vnet.ibm.com -
Date: Thu, 1 Sep 2011 11:55:17 +0800
From: Zhi Yong Wu wu...@linux.vnet.ibm.com
To: Stefan Hajnoczi stefa...@gmail.com
Cc: Daniel P. Berrange berra...@redhat.com, Stefan Hajnoczi
stefa...@gmail.com, Adam Litke a...@us.ibm.com, Zhi Yong Wu
wu...@linux.vnet.ibm.com, QEMU Developers qemu-de...@nongnu.org,
guijianf...@cn.fujitsu.com, hu...@cn.fujitsu.com
Subject: [RFC] block I/O throttling: how to enable in libvirt

On Wed, Aug 31, 2011 at 08:18:19AM +0100, Stefan Hajnoczi wrote:

On Tue, Aug 30, 2011 at 2:46 PM, Adam Litke a...@us.ibm.com wrote:
 On Tue, Aug 30, 2011 at 09:53:33AM +0100, Stefan Hajnoczi wrote:
 On Tue, Aug 30, 2011 at 3:55 AM, Zhi Yong Wu zwu.ker...@gmail.com wrote:
  I am trying to enable the block I/O throttling function in libvirt. But
  currently I have met some design questions, and am not sure whether we
  should extend blkiotune to support block I/O throttling or introduce
  one new libvirt command blkiothrottle to cover it. If you
  have a better idea, please don't hesitate to drop your comments.

 A little bit of context: this discussion is about adding libvirt
 support for QEMU disk I/O throttling.

 Thanks for the additional context Stefan.

 Today libvirt supports the cgroups blkio-controller, which handles
 proportional shares and throughput/iops limits on host block devices.
 blkio-controller does not support network file systems (NFS) or other
 QEMU remote block drivers (curl, Ceph/rbd, sheepdog) since they are
 not host block devices.  QEMU I/O throttling works with all types of
 -drive and therefore complements blkio-controller.
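To make the cgroups side concrete, a rough host-side sketch is below; it assumes a cgroup v1 blkio hierarchy mounted at /sys/fs/cgroup/blkio and an already-created group named "mygroup", and the 8:0 major:minor pair is only a placeholder.

/* Rough sketch of the host-side blkio-controller knobs: proportional weight
 * plus a hard read-bandwidth limit.  The cgroup path, group name and the 8:0
 * device number are placeholders for illustration only. */
#include <stdio.h>

static int write_str(const char *path, const char *val)
{
    FILE *f = fopen(path, "w");
    if (!f) {
        perror(path);
        return -1;
    }
    fprintf(f, "%s\n", val);
    return fclose(f) == 0 ? 0 : -1;
}

int main(void)
{
    /* proportional-share weight, e.g. 500 (blkiotune documents [100, 1000]) */
    write_str("/sys/fs/cgroup/blkio/mygroup/blkio.weight", "500");

    /* hard limit: 1 MB/s of reads from block device 8:0 */
    write_str("/sys/fs/cgroup/blkio/mygroup/blkio.throttle.read_bps_device",
              "8:0 1048576");
    return 0;
}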

 The first question that pops into my mind is: Should a user need to 
 understand
 when to use the cgroups blkio-controller vs. the QEMU I/O throttling method? 
  In
 my opinion, it would be nice if libvirt had a single interface for block I/O
 throttling and libvirt would decide which mechanism to use based on the type 
 of
 device and the specific limits that need to be set.

Yes, I agree it would be simplest to pick the right mechanism,
depending on the type of throttling the user wants.  More below.

 I/O throttling can be applied independently to each -drive attached to
 a guest and supports throughput/iops limits.  For more information on
 this QEMU feature and a comparison with blkio-controller, see Ryan
 Harper's KVM Forum 2011 presentation:

 http://www.linux-kvm.org/wiki/images/7/72/2011-forum-keep-a-limit-on-it-io-throttling-in-qemu.pdf

 From the presentation, it seems that both the cgroups method and the qemu
 method offer comparable control (assuming a block device), so it might be
 possible to apply either method from the same API in a transparent manner.
 Am I correct, or are we suggesting that the Qemu throttling approach should
 always be used for Qemu domains?

QEMU I/O throttling does not provide a proportional share mechanism.
So you cannot assign weights to VMs and let them receive a fraction of
the available disk time.  That is only supported by cgroups
blkio-controller because it requires a global view which QEMU does not
have.

So I think the two are complementary:

If proportional share should be used on a host block device, use
cgroups blkio-controller.
Otherwise use QEMU I/O throttling.
Stefan,

Do you agree with introducing one new libvirt command blkiothrottle now?
If so, I will work on the code draft to make it work.

Daniel and other maintainers,

If you are available, can you make some comments for us? :)


Regards,

Zhi Yong Wu

Stefan

- End forwarded message -

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list

Re: [libvirt] The design choice for how to enable block I/O throttling function in libvirt

2011-08-30 Thread Zhi Yong Wu
On Tue, Aug 30, 2011 at 3:18 PM, shu ming shum...@linux.vnet.ibm.com wrote:
 See comments below.
 Zhi Yong Wu:

  Hi, folks,

  I am trying to enable the block I/O throttling function in libvirt. But
  currently I have met some design questions, and am not sure whether we
  should extend blkiotune to support block I/O throttling or introduce
  one new libvirt command blkiothrottle to cover it. If you
  have a better idea, please don't hesitate to drop your comments.

 If one new libvirt command blkiothrottle is introduced, I plan to
 design its usage syntax as below:

 virsh # help blkiothrottle
   NAME
     blkiothrottle - Set or display a block disk I/O throttle setting.

   SYNOPSIS
     blkiothrottle <domain> <device> [--bps <number>] [--bps_rd <number>]
     [--bps_wr <number>] [--iops <number>] [--iops_rd <number>]
     [--iops_wr <number>]

   DESCRIPTION
     Set or display a block disk I/O throttle setting.

   OPTIONS
     [--domain] <string>  domain name, id or uuid
     [--device] <string>  block device
     --bps <number>       total throughput limits in bytes/s
     --bps_rd <number>    read throughput limits in bytes/s
     --bps_wr <number>    write throughput limits in bytes/s
     --iops <number>      total operation limits in numbers/s
     --iops_rd <number>   read operation limits in numbers/s
     --iops_wr <number>   write operation limits in numbers/s


 How to display the current I/O throttle setting of a specific block device
 here?
It will show as below:
virtio0: bps=xxx, bps_rd=xxx, bps_wr=xxx, iops=xxx, iops_rd=xxx, iops_wr=xxx.
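For comparison, the existing blkio settings are already queried through the unified typed-parameter API; a sketch of that two-step call pattern is below. Today it only returns the weight, so the per-device bps/iops values above would need parameters of their own.

/* Sketch: dump whatever blkio parameters the driver reports for a domain.
 * Uses the usual two-step pattern: first ask how many parameters exist,
 * then fetch them.  Assumes `dom` was obtained as in the earlier sketch. */
#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>

static int dump_blkio_params(virDomainPtr dom)
{
    virTypedParameterPtr params = NULL;
    int nparams = 0;
    int i;

    /* With NULL/0 the call only reports the number of supported parameters */
    if (virDomainGetBlkioParameters(dom, NULL, &nparams, 0) < 0 || nparams <= 0)
        return -1;

    if (!(params = calloc(nparams, sizeof(*params))))
        return -1;

    if (virDomainGetBlkioParameters(dom, params, &nparams, 0) < 0) {
        free(params);
        return -1;
    }

    for (i = 0; i < nparams; i++) {
        switch (params[i].type) {
        case VIR_TYPED_PARAM_UINT:
            printf("%s=%u\n", params[i].field, params[i].value.ui);
            break;
        case VIR_TYPED_PARAM_ULLONG:
            printf("%s=%llu\n", params[i].field, params[i].value.ul);
            break;
        default:
            printf("%s=<other type>\n", params[i].field);
        }
    }

    free(params);
    return 0;
}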

 I prefer to have fewer commands, to be as simple as possible for users.  But it
 seems that we need another command here instead of having a block-IO-specific
 command like iothrottle.
I also prefer this, but would like to get other guys' suggestions,
especially the maintainers'.

 Supposedly, the next step of I/O throttling will be a network device limit.
  Should we have another
  new command like niciothrottle?
For network devices, the network cgroup can cover this.






-- 
Regards,

Zhi Yong Wu

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] The design choice for how to enable block I/O throttling function in libvirt

2011-08-30 Thread Zhi Yong Wu
On Tue, Aug 30, 2011 at 4:31 PM, shu ming shum...@linux.vnet.ibm.com wrote:
 Zhi Yong Wu:

 On Tue, Aug 30, 2011 at 3:18 PM, shu mingshum...@linux.vnet.ibm.com
  wrote:

  See comments below.
 Zhi Yong Wu:

  Hi, folks,

  I am trying to enable the block I/O throttling function in libvirt. But
  currently I have met some design questions, and am not sure whether we
  should extend blkiotune to support block I/O throttling or introduce
  one new libvirt command blkiothrottle to cover it. If you
  have a better idea, please don't hesitate to drop your comments.

 If one new libvirt command blkiothrottle is introduced, I plan to
 design its usage syntax as below:

 virsh # help blkiothrottle
   NAME
     blkiothrottle - Set or display a block disk I/O throttle setting.

   SYNOPSIS
     blkiothrottle <domain> <device> [--bps <number>] [--bps_rd <number>]
     [--bps_wr <number>] [--iops <number>] [--iops_rd <number>]
     [--iops_wr <number>]

   DESCRIPTION
     Set or display a block disk I/O throttle setting.

   OPTIONS
     [--domain] <string>  domain name, id or uuid
     [--device] <string>  block device
     --bps <number>       total throughput limits in bytes/s
     --bps_rd <number>    read throughput limits in bytes/s
     --bps_wr <number>    write throughput limits in bytes/s
     --iops <number>      total operation limits in numbers/s
     --iops_rd <number>   read operation limits in numbers/s
     --iops_wr <number>   write operation limits in numbers/s

 How to display the current I/O throttle setting of a specific block
 device
 here?

 It will show as below:
 virtio0: bps=xxx, bps_rd=xxx, bps_wr=xxx, iops=xxx, iops_rd=xxx,
 iops_wr=xxx.

 With which options to the command?  I guess blkiothrottle domain
 device will display the current setting.
Right







-- 
Regards,

Zhi Yong Wu

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


[libvirt] The design choice for how to enable block I/O throttling function in libvirt

2011-08-29 Thread Zhi Yong Wu
Hi, folks,

I am trying to enable the block I/O throttling function in libvirt. But
currently I have met some design questions, and am not sure whether we
should extend blkiotune to support block I/O throttling or introduce
one new libvirt command blkiothrottle to cover it. If you
have a better idea, please don't hesitate to drop your comments.

1.) If one new libvirt command blkiothrottle is introduced, I plan to
design its usage syntax as below:

virsh # help blkiothrottle
  NAME
blkiothrottle - Set or display a block disk I/O throttle setting.

  SYNOPSIS
blkiothrottle <domain> <device> [--bps <number>] [--bps_rd <number>]
[--bps_wr <number>] [--iops <number>] [--iops_rd <number>]
[--iops_wr <number>]

  DESCRIPTION
Set or display a block disk I/O throttle setting.

  OPTIONS
[--domain] <string>  domain name, id or uuid
[--device] <string>  block device
--bps <number>       total throughput limits in bytes/s
--bps_rd <number>    read throughput limits in bytes/s
--bps_wr <number>    write throughput limits in bytes/s
--iops <number>      total operation limits in numbers/s
--iops_rd <number>   read operation limits in numbers/s
--iops_wr <number>   write operation limits in numbers/s


virsh #

2.) If the blkiotune command is extended to enable the block I/O throttling function:

virsh # help blkiotune
  NAME
blkiotune - Get or set blkio parameters

  SYNOPSIS
blkiotune <domain> [--weight <number>] [--config] [--live]
[--current] [--bps <number>] [--bps_rd <number>] [--bps_wr <number>]
[--iops <number>] [--iops_rd <number>] [--iops_wr <number>]

  DESCRIPTION
Get or set the current blkio parameters for a guest domain.
To get the blkio parameters, use the following command:

virsh # blkiotune <domain>

  OPTIONS
[--domain] <string>  domain name, id or uuid
--weight <number>    IO Weight in range [100, 1000]
--config             affect next boot
--live               affect running domain
--current            affect current domain

Suggestions or comments about which option to choose are welcome. Thanks.
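If option 2 is chosen, the virsh change would sit on top of the existing virDomainSetBlkioParameters() call; a sketch of what a caller might pass is below. The parameter names "bps" and "iops_rd" are purely hypothetical placeholders here, not names defined by libvirt.

/* Sketch for option 2: feed per-device throttle values through the existing
 * typed-parameter blkio API.  The names "bps" and "iops_rd" are hypothetical
 * placeholders -- libvirt does not define them -- so this only illustrates
 * the calling convention, not a real interface. */
#include <string.h>
#include <libvirt/libvirt.h>

static int set_throttle_sketch(virDomainPtr dom)
{
    virTypedParameter params[2];

    memset(params, 0, sizeof(params));

    strncpy(params[0].field, "bps", VIR_TYPED_PARAM_FIELD_LENGTH - 1);
    params[0].type = VIR_TYPED_PARAM_ULLONG;
    params[0].value.ul = 1048576;      /* 1 MB/s total throughput */

    strncpy(params[1].field, "iops_rd", VIR_TYPED_PARAM_FIELD_LENGTH - 1);
    params[1].type = VIR_TYPED_PARAM_ULLONG;
    params[1].value.ul = 100;          /* 100 read operations/s */

    /* Same entry point and the same --live/--config/--current flag handling
     * that the weight parameter already goes through. */
    return virDomainSetBlkioParameters(dom, params, 2, VIR_DOMAIN_AFFECT_LIVE);
}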

-- 
Regards,

Zhi Yong Wu

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] RFC (V2) New virDomainBlockPull API family to libvirt

2011-08-16 Thread Zhi Yong Wu
On Tue, 2011-08-16 at 09:31 -0500, Adam Litke wrote:
 Stefan has a git repo with QED block streaming support here:
 
 git://repo.or.cz/qemu/stefanha.git stream-command
OK, thanks.
 
 
 On 08/15/2011 08:25 PM, Zhi Yong Wu wrote:
  On Mon, Aug 15, 2011 at 8:36 PM, Adam Litke a...@us.ibm.com wrote:
  On 08/14/2011 11:40 PM, Zhi Yong Wu wrote:
   Hi, Daniel and Adam.

   Has the patchset been merged into libvirt upstream?
 
  Yes they have.  However, the functionality is still missing from qemu.
  The two communities have agreed upon the interface and semantics, but
  work continues on the qemu implementation.  Let me know if you would
  like a link to some qemu patches that support this functionality for qed
  images.
   Sure. If you share it with me, then while learning your libvirt
   API I can also sample this feature. :) Anyway, thanks, Adam.
  
 
  --
  Adam Litke
  IBM Linux Technology Center
 
  
  
  
 


-- 
Regards,

Zhi Yong Wu

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] RFC (V2) New virDomainBlockPull API family to libvirt

2011-08-16 Thread Zhi Yong Wu
On Tue, Aug 16, 2011 at 11:39 PM, Daniel P. Berrange
berra...@redhat.com wrote:
 On Tue, Aug 16, 2011 at 09:28:27AM +0800, Zhi Yong Wu wrote:
 On Tue, Aug 16, 2011 at 6:52 AM, Stefan Hajnoczi stefa...@gmail.com wrote:
  On Mon, Aug 15, 2011 at 1:36 PM, Adam Litke a...@us.ibm.com wrote:
  On 08/14/2011 11:40 PM, Zhi Yong Wu wrote:
  Hi, Daniel and Adam.

  Has the patchset been merged into libvirt upstream?
 
  Yes they have.  However, the functionality is still missing from qemu.
  The two communities have agreed upon the interface and semantics, but
  work continues on the qemu implementation.  Let me know if you would
  like a link to some qemu patches that support this functionality for qed
  images.
 
  I also have a series to put these commands into QEMU without any image
  format support.  They just return NotSupported but it puts the
  commands into QEMU so we can run the libvirt commands against them.
  Without image format support, it will not be a nice sample for us. :)
  Why did you not implement them?

 Code for block streaming with QED will be going into QEMU in the not
 too distant future too.
Got it, thanks.

 Daniel
 --
 |: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org              -o-             http://virt-manager.org :|
 |: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|




-- 
Regards,

Zhi Yong Wu

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] RFC (V2) New virDomainBlockPull API family to libvirt

2011-08-15 Thread Zhi Yong Wu
On Mon, Aug 15, 2011 at 8:36 PM, Adam Litke a...@us.ibm.com wrote:
 On 08/14/2011 11:40 PM, Zhi Yong Wu wrote:
  Hi, Daniel and Adam.

  Has the patchset been merged into libvirt upstream?

 Yes they have.  However, the functionality is still missing from qemu.
 The two communities have agreed upon the interface and semantics, but
 work continues on the qemu implementation.  Let me know if you would
 like a link to some qemu patches that support this functionality for qed
 images.
Sure. If you share it with me, then while learning your libvirt
API I can also sample this feature. :) Anyway, thanks, Adam.


 --
 Adam Litke
 IBM Linux Technology Center




-- 
Regards,

Zhi Yong Wu

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] RFC (V2) New virDomainBlockPull API family to libvirt

2011-08-15 Thread Zhi Yong Wu
On Tue, Aug 16, 2011 at 6:52 AM, Stefan Hajnoczi stefa...@gmail.com wrote:
 On Mon, Aug 15, 2011 at 1:36 PM, Adam Litke a...@us.ibm.com wrote:
 On 08/14/2011 11:40 PM, Zhi Yong Wu wrote:
  Hi, Daniel and Adam.

  Has the patchset been merged into libvirt upstream?

 Yes they have.  However, the functionality is still missing from qemu.
 The two communities have agreed upon the interface and semantics, but
 work continues on the qemu implementation.  Let me know if you would
 like a link to some qemu patches that support this functionality for qed
 images.

 I also have a series to put these commands into QEMU without any image
 format support.  They just return NotSupported but it puts the
 commands into QEMU so we can run the libvirt commands against them.
Without image format support, it will not be a nice sample for us. :)
Why did you not implement them?


 Will send those patches to qemu-devel soon.

 Stefan




-- 
Regards,

Zhi Yong Wu

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] RFC (V2) New virDomainBlockPull API family to libvirt

2011-08-14 Thread Zhi Yong Wu
Hi, Daniel and Adam.

Has the patchset been merged into libvirt upstream?

On Fri, Jul 22, 2011 at 11:01 PM, Adam Litke a...@us.ibm.com wrote:
 Thanks Daniel.  The upstream code is looking good.  I will work on
 adding some documentation to the development guide.

 On 07/22/2011 01:07 AM, Daniel Veillard wrote:
 On Thu, Jul 21, 2011 at 01:55:04PM -0500, Adam Litke wrote:
 Here are the patches to implement the BlockPull/BlockJob API as discussed 
 and
 agreed to.  I am testing with a python script (included for completeness as 
 the
 final patch).  The qemu monitor interface is not expected to change in the
 future.  Stefan is planning to submit placeholder commands for upstream qemu
 until the generic streaming support is implemented.

 Changes since V1:
  - Make virDomainBlockPullAbort() and virDomainGetBlockPullInfo() into a
    generic BlockJob interface.
  - Added virDomainBlockJobSetSpeed()
  - Rename VIR_DOMAIN_EVENT_ID_BLOCK_PULL event to fit into block job API
  - Add bandwidth argument to virDomainBlockPull()

 Summary of changes since first generation patch series:
  - Qemu dropped incremental streaming so remove libvirt incremental
    BlockPull() API
  - Rename virDomainBlockPullAll() to virDomainBlockPull()
  - Changes required to qemu monitor handlers for changed command names

 --

 To help speed the provisioning process for large domains, new QED disks are
 created with backing to a template image.  These disks are configured with
 copy on read such that blocks that are read from the backing file are copied
 to the new disk.  This reduces I/O over a potentially costly path to the
 backing image.

 In such a configuration, there is a desire to remove the dependency on the
 backing image as the domain runs.  To accomplish this, qemu will provide an
 interface to perform sequential copy on read operations during normal VM
 operation.  Once all data has been copied, the disk image's link to the
 backing file is removed.

 The virDomainBlockPull API family brings this functionality to libvirt.

 virDomainBlockPull() instructs the hypervisor to stream the entire device in
 the background.  Progress of this operation can be checked with the function
 virDomainBlockJobInfo().  An ongoing stream can be cancelled with
 virDomainBlockJobAbort().  virDomainBlockJobSetSpeed() allows you to limit 
 the
 bandwidth that the operation may consume.

 An event (VIR_DOMAIN_EVENT_ID_BLOCK_JOB) will be emitted when a disk has 
 been
 fully populated or if a BlockPull() operation was terminated due to an 
 error.
 This event is useful to avoid polling on virDomainBlockJobInfo() for
 completion and could also be used by the security driver to revoke access to
 the backing file when it is no longer needed.
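As a rough illustration of how a client drives this API family (assuming a valid `dom` and a disk named "vda"; bandwidth values for these calls are in MiB/s, and error handling is trimmed):

/* Sketch: start a block pull, cap its bandwidth, poll for progress, and try
 * to cancel it on error.  Assumes `dom` is a valid virDomainPtr and "vda"
 * names the disk; a production client would use the block-job event instead
 * of polling. */
#include <stdio.h>
#include <unistd.h>
#include <libvirt/libvirt.h>

static int pull_disk(virDomainPtr dom, const char *disk)
{
    virDomainBlockJobInfo info;
    int ret;

    /* Stream the whole device in the background; 0 means no bandwidth cap */
    if (virDomainBlockPull(dom, disk, 0, 0) < 0)
        return -1;

    /* Optionally cap the running job at 10 MiB/s */
    virDomainBlockJobSetSpeed(dom, disk, 10, 0);

    /* 1 == a job exists and info was filled in; 0 == no job left */
    while ((ret = virDomainGetBlockJobInfo(dom, disk, &info, 0)) == 1) {
        printf("progress: %llu / %llu\n",
               (unsigned long long)info.cur,
               (unsigned long long)info.end);
        sleep(1);
    }

    if (ret < 0) {
        virDomainBlockJobAbort(dom, disk, 0);   /* best-effort cleanup */
        return -1;
    }
    return 0;
}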

   Thanks Adam for that revised patch set.

     ACK

 It all looked good to me, based on previous review and a last look.
 I just had to fix a few merge conflicts due to new entry points being
 added in the meantime and one commit message, but basically it was clean :-)

   So I pushed the set except 8 of course. I'm not sure if we should try
 to store it in the example, or on the wiki. The Wiki might be a bit more
 logical because I'm not sure we can run the test as is now in all
 setups.


   I think the remaining item would be to add documentation about how to
 use this, the paragraphs above should probably land somewhere on the web
 site, ideally on the development guide
   http://libvirt.org/devguide.html
 but I'm open to suggestions :-)

 Daniel


 --
 Adam Litke
 IBM Linux Technology Center

 --
 libvir-list mailing list
 libvir-list@redhat.com
 https://www.redhat.com/mailman/listinfo/libvir-list




-- 
Regards,

Zhi Yong Wu

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list