Re: [openstack-dev] [nova] Should we limit the disk IO bandwidth in copy_image while creating new instance?

2014-02-16 Thread Wangpan
Hi yunhong,
I agree with you about taking I/O bandwidth as a resource, but it may not be so easy to implement.
As for your other thought about launch time: it may not be so bad, since only the first boot is affected.

2014-02-17



Wangpan



From: yunhong jiang
Sent: 2014-02-15 08:21
Subject: Re: [openstack-dev] [nova] Should we limit the disk IO bandwidth in copy_image while creating new instance?
To: "OpenStack Development Mailing List (not for usage questions)"
Cc:

On Fri, 2014-02-14 at 10:22 +0100, Sylvain Bauza wrote: 
> Instead of limiting the consumed bandwidth by proposing a
> configuration flag (yet another one, and which default value would be
> set?), I would propose to only decrease the niceness of the process
> itself, so that other processes would get the I/O access first.
> That's not perfect, I assume, but it's a quick workaround that limits
> the frustration.
>  
>  
> -Sylvain 
>  
Decreasing the niceness is good for the short term. Some small concerns: will it cause long launch times if the host is I/O intensive? And if launch time is billed as well, that is not fair to the new instance either.

I think the ideal world is I/O QoS, e.g. through cgroups: take I/O bandwidth as a resource, and treat copy_image as a consumer of that I/O bandwidth resource.

Thanks 
--jyh 
>  
> 2014-02-14 4:52 GMT+01:00 Wangpan : 
> Currently nova doesn't limit the disk IO bandwidth in 
> copy_image() method while creating a new instance, so the 
> other instances on this host may be affected by this high disk 
> IO consuming operation, and some time-sensitive business(e.g 
> RDS instance with heartbeat) may be switched between master 
> and slave. 
>   
> So can we use the `rsync --bwlimit=${bandwidth} src dst` 
> command instead of `cp src dst` while copy_image in 
> create_image() of libvirt driver, the remote image copy 
> operation also can be limited by `rsync --bwlimit= 
> ${bandwidth}` or `scp -l=${bandwidth}`, this parameter 
> ${bandwidth} can be a new configuration in nova.conf which 
> allow cloud admin to config it, it's default value is 0 which 
> means no limitation, then the instances on this host will be 
> not affected while a new instance with not cached image is 
> creating. 
>   
> the example codes: 
> nova/virt/libvit/utils.py: 
> diff --git a/nova/virt/libvirt/utils.py 
> b/nova/virt/libvirt/utils.py 
> index e926d3d..5d7c935 100644 
> --- a/nova/virt/libvirt/utils.py 
> +++ b/nova/virt/libvirt/utils.py 
> @@ -473,7 +473,10 @@ def copy_image(src, dest, host=None): 
>  # sparse files.  I.E. holes will not be written to 
> DEST, 
>  # rather recreated efficiently.  In addition, since 
>  # coreutils 8.11, holes can be read efficiently too. 
> -execute('cp', src, dest) 
> +if CONF.mbps_in_copy_image > 0: 
> +execute('rsync', '--bwlimit=%s' % 
> CONF.mbps_in_copy_image * 1024, src, dest) 
> +else: 
> +execute('cp', src, dest) 
>  else: 
>  dest = "%s:%s" % (host, dest) 
>  # Try rsync first as that can compress and create 
> sparse dest files. 
> @@ -484,11 +487,22 @@ def copy_image(src, dest, host=None): 
>  # Do a relatively light weight test first, so 
> that we 
>  # can fall back to scp, without having run out of 
> space 
>  # on the destination for example. 
> -execute('rsync', '--sparse', '--compress', 
> '--dry-run', src, dest) 
> +if CONF.mbps_in_copy_image > 0: 
> +execute('rsync', '--sparse', '--compress', 
> '--dry-run', 
> +'--bwlimit=%s' % 
> CONF.mbps_in_copy_image * 1024, src, dest) 
> +else: 
> +execute('rsync', '--sparse', '--compress', 
> '--dry-run', src, dest) 
>  except processutils.ProcessExecutionError: 
> -execute('scp', src, dest) 
> +if CONF.mbps_in_copy_image > 0: 
> +execute('scp', '-l', '%s' % 
> CONF.mbps_in_copy_image * 1024

Re: [openstack-dev] [nova] Should we limit the disk IO bandwidth in copy_image while creating new instance?

2014-02-16 Thread Wangpan
Hi sahid,
I have tested `scp -l xxx src dst` (a local scp copy) and believe the `-l` option has no effect in this case;
it seems `-l` is only valid for remote copies.


2014-02-17



Wangpan



From: sahid
Sent: 2014-02-14 17:58
Subject: Re: [openstack-dev] [nova] Should we limit the disk IO bandwidth in copy_image while creating new instance?
To: "OpenStack Development Mailing List (not for usage questions)"
Cc:

It could be a good idea, but as Sylvain said, how should this be configured? Also, what
about using scp instead of rsync for a local copy?

___ 
OpenStack-dev mailing list 
OpenStack-dev@lists.openstack.org 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 



Re: [openstack-dev] [nova] Should we limit the disk IO bandwidth in copy_image while creating new instance?

2014-02-16 Thread Wangpan
Hi Sylvain,

The default value can be set to 0 or -1, which means no limitation. I think decreasing
the niceness of nova-compute or of cp/rsync/scp would also need a configuration option,
because we can't lower it manually while copy_image is running.

My other concern is that only decreasing the niceness of the process may have little effect:
I have tested with `nice -n 19 cp src dst` and also `ionice -c 2 cp src dst`,
and the I/O utilization and bandwidth consumption did not seem to decrease.
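For reference, the shell-out I tested could be sketched with a small hypothetical helper like the one below (not nova code). Note my tests above used the best-effort class (`-c 2`); the idle class (`-c 3`) only receives I/O time when no other process wants it, so it may behave differently, though I have not measured it:

```python
def ionice_wrap(cmd, io_class=3):
    """Prefix a command with ionice; class 3 (idle) only gets spare I/O time."""
    return ['ionice', '-c', str(io_class)] + list(cmd)

# e.g. run the local image copy at idle I/O priority:
#   subprocess.check_call(ionice_wrap(['cp', src, dest]))
```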

2014-02-17



Wangpan



From: Sylvain Bauza
Sent: 2014-02-14 17:22
Subject: Re: [openstack-dev] [nova] Should we limit the disk IO bandwidth in copy_image while creating new instance?
To: "OpenStack Development Mailing List (not for usage questions)"
Cc:

Instead of limiting the consumed bandwidth by proposing a configuration flag
(yet another one, and which default value would be set?), I would propose to only
decrease the niceness of the process itself, so that other processes would get
the I/O access first.
That's not perfect, I assume, but it's a quick workaround that limits the
frustration.


-Sylvain






Re: [openstack-dev] [nova] Should we limit the disk IO bandwidth in copy_image while creating new instance?

2014-02-14 Thread yunhong jiang
On Fri, 2014-02-14 at 10:22 +0100, Sylvain Bauza wrote:
> Instead of limiting the consumed bandwidth by proposing a
> configuration flag (yet another one, and which default value would be
> set?), I would propose to only decrease the niceness of the process
> itself, so that other processes would get the I/O access first.
> That's not perfect, I assume, but it's a quick workaround that limits
> the frustration.
> 
> 
> -Sylvain
> 
Decreasing the niceness is good for the short term. Some small concerns: will it cause
long launch times if the host is I/O intensive? And if launch time is billed as well,
that is not fair to the new instance either.

I think the ideal world is I/O QoS, e.g. through cgroups: take I/O bandwidth as a
resource, and treat copy_image as a consumer of that I/O bandwidth resource.

Thanks
--jyh
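To make the cgroup idea above concrete, here is a rough sketch (a hypothetical helper, not nova code; it assumes the cgroup-v1 blkio controller is mounted and the backing device's major:minor number is known). The `blkio.throttle.{read,write}_bps_device` files expect a rule of the form `MAJ:MIN bytes_per_second`:

```python
def blkio_throttle_rule(dev, mbps):
    """Format a blkio.throttle.*_bps_device rule limiting `dev` to `mbps` MB/s."""
    return '%s %d' % (dev, mbps * 1024 * 1024)

# With root, an operator (or nova-compute) would then, roughly:
#   mkdir /sys/fs/cgroup/blkio/nova-copy
#   write the rule into .../nova-copy/blkio.throttle.write_bps_device
#   write the cp/rsync PID into .../nova-copy/tasks
# so the kernel enforces the cap for that copy only.
```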

Re: [openstack-dev] [nova] Should we limit the disk IO bandwidth in copy_image while creating new instance?

2014-02-14 Thread sahid
It could be a good idea, but as Sylvain said, how should this be configured? Also, what
about using scp instead of rsync for a local copy?

- Original Message -
From: "Wangpan" 
To: "OpenStack Development Mailing List" 
Sent: Friday, February 14, 2014 4:52:20 AM
Subject: [openstack-dev] [nova] Should we limit the disk IO bandwidth in
copy_image while creating new instance?

Currently nova doesn't limit the disk I/O bandwidth in the copy_image() method while
creating a new instance, so the other instances on this host may be affected by this
I/O-intensive operation, and some time-sensitive services (e.g. an RDS instance with a
heartbeat) may fail over between master and slave.

So can we use `rsync --bwlimit=${bandwidth} src dst` instead of `cp src dst` for
copy_image in create_image() of the libvirt driver? The remote image copy can likewise
be limited with `rsync --bwlimit=${bandwidth}` or `scp -l ${bandwidth}`. The parameter
${bandwidth} could be a new option in nova.conf that the cloud admin configures; its
default value would be 0, meaning no limit, so that instances on the host are not
affected while a new instance with an uncached image is being created.

the example code:
nova/virt/libvirt/utils.py:
diff --git a/nova/virt/libvirt/utils.py b/nova/virt/libvirt/utils.py
index e926d3d..5d7c935 100644
--- a/nova/virt/libvirt/utils.py
+++ b/nova/virt/libvirt/utils.py
@@ -473,7 +473,10 @@ def copy_image(src, dest, host=None):
         # sparse files.  I.E. holes will not be written to DEST,
         # rather recreated efficiently.  In addition, since
         # coreutils 8.11, holes can be read efficiently too.
-        execute('cp', src, dest)
+        if CONF.mbps_in_copy_image > 0:
+            execute('rsync', '--bwlimit=%s' % (CONF.mbps_in_copy_image * 1024),
+                    src, dest)
+        else:
+            execute('cp', src, dest)
     else:
         dest = "%s:%s" % (host, dest)
         # Try rsync first as that can compress and create sparse dest files.
@@ -484,11 +487,22 @@ def copy_image(src, dest, host=None):
             # Do a relatively light weight test first, so that we
             # can fall back to scp, without having run out of space
             # on the destination for example.
-            execute('rsync', '--sparse', '--compress', '--dry-run', src, dest)
+            if CONF.mbps_in_copy_image > 0:
+                execute('rsync', '--sparse', '--compress', '--dry-run',
+                        '--bwlimit=%s' % (CONF.mbps_in_copy_image * 1024),
+                        src, dest)
+            else:
+                execute('rsync', '--sparse', '--compress', '--dry-run', src, dest)
         except processutils.ProcessExecutionError:
-            execute('scp', src, dest)
+            if CONF.mbps_in_copy_image > 0:
+                execute('scp', '-l', '%s' % (CONF.mbps_in_copy_image * 1024 * 8),
+                        src, dest)
+            else:
+                execute('scp', src, dest)
         else:
-            execute('rsync', '--sparse', '--compress', src, dest)
+            if CONF.mbps_in_copy_image > 0:
+                execute('rsync', '--sparse', '--compress',
+                        '--bwlimit=%s' % (CONF.mbps_in_copy_image * 1024),
+                        src, dest)
+            else:
+                execute('rsync', '--sparse', '--compress', src, dest)
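A side note on the unit conversions in the example: rsync's `--bwlimit` is in KBytes/s while scp's `-l` is in Kbit/s, hence the different factors. The branch logic can be summarized with a small hypothetical helper (the parentheses matter: in Python, `'%s' % x * 1024` formats first and then repeats the string 1024 times):

```python
def build_copy_cmd(src, dest, mbps=0, remote=False):
    """Pick cp/rsync/scp for an image copy, honouring an MB/s bandwidth cap."""
    if not remote:
        if mbps > 0:
            # rsync --bwlimit is in KBytes/s: MB/s * 1024.
            return ['rsync', '--bwlimit=%s' % (mbps * 1024), src, dest]
        return ['cp', src, dest]
    if mbps > 0:
        # scp -l is in Kbit/s: MB/s * 1024 KB * 8 bits.
        return ['scp', '-l', str(mbps * 1024 * 8), src, dest]
    return ['scp', src, dest]
```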


2014-02-14



Wangpan


Re: [openstack-dev] [nova] Should we limit the disk IO bandwidth in copy_image while creating new instance?

2014-02-14 Thread Sylvain Bauza
Instead of limiting the consumed bandwidth by proposing a configuration
flag (yet another one, and which default value would be set?), I would
propose to only decrease the niceness of the process itself, so that other
processes would get the I/O access first.
That's not perfect, I assume, but it's a quick workaround that limits the
frustration.

-Sylvain

