Re: [Qemu-devel] [PATCH RFC 1/2] rng-egd: improve egd backend performance
On Tue, Dec 17, 2013 at 12:33 PM, Amos Kong <ak...@redhat.com> wrote:
> In my test host, when I use the egd socket, it is very slow. So I use a quick source, /dev/urandom; we ignore the egd protocol here, which might be wrong. Can you suggest a way to test this the right way? It seems we should still use egd.pl to set up a daemon socket, but how do we make it very quick? We can't verify the performance improvement if the source is too slow. Can we use the `--bottomless` option for egd.pl? It does not decrement the entropy count. When I use this option, the speed (without my patches) is about 13 kB/s.

Is egd more likely to be found running *as a substitute* on host machines without a /dev/random device? If so, speed becomes a major issue when it has not been paired with a hardware source, since it gathers entropy from the output of the various programs it calls.

In that case, instead of having egd running on the host, would it be better to have the guests run their own copy of egd if needed? This would keep the entropy available to the guests independent of each other, and remove the issue of a single guest overusing and depleting the host's entropy for everyone else.

Otherwise, we could use the `--bottomless` option to make it fast for testing, but in practice, as the README suggests, it won't be good enough for generating keys. Since it communicates through sockets, we can build the qemu back-end this way.

Theoretically, would mixing entropy from egd (software-generated) with /dev/random (triggered by hardware events) produce a better entropy source than either of them individually? I know that /dev/random is pretty good, but if it can be mixed with other sources and still be useful, it can be made to last longer.
Varad

On Wed, Dec 18, 2013 at 3:35 PM, Giuseppe Scrivano <gscri...@redhat.com> wrote:
> Markus Armbruster <arm...@redhat.com> writes:
>> Amos Kong <ak...@redhat.com> writes:
>>> Bugzilla: https://bugs.launchpad.net/qemu/+bug/1253563
>>>
>>> We have a request queue to cache the random data, but the second request only comes in when the first request is returned, so we always have just one item in the queue. This hurts performance. This patch changes the IOThread to fill a fixed buffer with random data from the egd socket; request_entropy() returns data to the virtio queue whenever the buffer has data available.
>>>
>>> (tested with a fast source disguised as an egd socket)
>>> # cat /dev/urandom | nc -l localhost 8003
>>> # qemu .. -chardev socket,host=localhost,port=8003,id=chr0 \
>>>     -object rng-egd,chardev=chr0,id=rng0,buf_size=1024 \
>>>     -device virtio-rng-pci,rng=rng0
>>>
>>>   buf_size (bytes)   throughput (kB/s)
>>>   ----------------   -----------------
>>>   131072             835
>>>    65536             652
>>>    32768             356
>>>    16384             182
>>>     8192              99
>>>     4096              52
>>>     2048              30
>>>     1024              15
>>>      512               8
>>>      256               4
>>>      128               3
>>>       64               2
>>
>> I'm not familiar with the rng-egd code, but perhaps my question has value anyway: could aggressive reading ahead on a source of randomness cause trouble by depleting the source? Consider a server restarting a few dozen guests after reboot, where each guest's QEMU then tries to slurp in a couple of KiB of randomness. How does this behave?
>
> I hit this performance problem while I was working on RNG device support in virt-manager, and I also noticed that the bottleneck is the egd backend, which responds slowly to requests. I thought as well about adding a buffer, but handling it through a new message type in the EGD protocol. The new message type informs the EGD daemon of the buffer size and marks the buffer data as lower priority, so the daemon fills it only when there are no other queued requests. Could such an approach solve the scenario you've described?
>
> Cheers, Giuseppe
[Qemu-devel] First Patch, Requesting Comments
Hi! I'm new here, and am working on my first bug. I have posted a patch for Bug #603872 [1] to the list. It's incomplete right now, but please have a look and tell me if I'm headed in the right direction. (I don't know whether sending incomplete patches to the mailing list for suggestions is acceptable, or whether that will cause problems.)

Use case: `qemu-img convert` with -p now shows the write speed.

I have a few doubts relating to the patch.

1. I'm calculating the speed using the time taken to run the for(;;) loop at qemu-img.c:1477. I figured that every time this loop runs, n1 sectors are converted, so I calculate write_speed accordingly. Is this correct?

2. I have changed qemu-progress.c:qemu_progress_print() to take a speed parameter, thinking that it would be the best option. Should I do it some other way instead (maybe write another function to print just the speed)? Also, what does IO_BUF_SIZE in the same file relate to?

Thanks.
Varad

[1] https://bugs.launchpad.net/qemu/+bug/603872
[Qemu-devel] [PATCH/RFC] qemu-img: show image conversion speed
From: Varad Gautam <varadgau...@live.com>

Calculate and display the write speed when converting an image with the
-p parameter. qemu-progress.c:qemu_progress_print() now takes a speed
parameter to print.

Signed-off-by: Varad Gautam <varadgau...@gmail.com>
---
 include/qemu-common.h |  4 ++--
 qemu-img.c            | 31 +--
 util/qemu-progress.c  | 11 +++
 3 files changed, 26 insertions(+), 20 deletions(-)

diff --git a/include/qemu-common.h b/include/qemu-common.h
index 5054836..0e27c68 100644
--- a/include/qemu-common.h
+++ b/include/qemu-common.h
@@ -349,9 +349,9 @@ size_t qemu_iovec_memset(QEMUIOVector *qiov, size_t offset,
 bool buffer_is_zero(const void *buf, size_t len);

-void qemu_progress_init(int enabled, float min_skip);
+void qemu_progress_init(int enabled, float min_skip, float speed);
 void qemu_progress_end(void);
-void qemu_progress_print(float delta, int max);
+void qemu_progress_print(float delta, int max, float speed);
 const char *qemu_get_vm_name(void);

 #define QEMU_FILE_TYPE_BIOS 0
diff --git a/qemu-img.c b/qemu-img.c
index bf3fb4f..cf313ed 100644
--- a/qemu-img.c
+++ b/qemu-img.c
@@ -945,7 +945,7 @@ static int img_compare(int argc, char **argv)
     filename2 = argv[optind++];

     /* Initialize before goto out */
-    qemu_progress_init(progress, 2.0);
+    qemu_progress_init(progress, 2.0, 0);

     bs1 = bdrv_new_open(filename1, fmt1, BDRV_O_FLAGS, true, quiet);
     if (!bs1) {
@@ -970,7 +970,7 @@ static int img_compare(int argc, char **argv)
     total_sectors = MIN(total_sectors1, total_sectors2);
     progress_base = MAX(total_sectors1, total_sectors2);

-    qemu_progress_print(0, 100);
+    qemu_progress_print(0, 100, 0);

     if (strict && total_sectors1 != total_sectors2) {
         ret = 1;
@@ -1053,7 +1053,7 @@ static int img_compare(int argc, char **argv)
             }
         }
         sector_num += nb_sectors;
-        qemu_progress_print(((float) nb_sectors / progress_base)*100, 100);
+        qemu_progress_print(((float) nb_sectors / progress_base)*100, 100, 0);
     }

     if (total_sectors1 != total_sectors2) {
@@ -1101,7 +1101,7 @@ static int img_compare(int argc, char **argv)
             }
         }
         sector_num += nb_sectors;
-        qemu_progress_print(((float) nb_sectors / progress_base)*100, 100);
+        qemu_progress_print(((float) nb_sectors / progress_base)*100, 100, 0);
     }
 }

@@ -1127,7 +1127,7 @@ static int img_convert(int argc, char **argv)
     const char *fmt, *out_fmt, *cache, *out_baseimg, *out_filename;
     BlockDriver *drv, *proto_drv;
     BlockDriverState **bs = NULL, *out_bs = NULL;
-    int64_t total_sectors, nb_sectors, sector_num, bs_offset;
+    int64_t total_sectors, nb_sectors, sector_num, bs_offset, time;
     uint64_t bs_sectors;
     uint8_t * buf = NULL;
     const uint8_t *buf1;
@@ -1136,7 +1136,7 @@ static int img_convert(int argc, char **argv)
     QEMUOptionParameter *out_baseimg_param;
     char *options = NULL;
     const char *snapshot_name = NULL;
-    float local_progress = 0;
+    float local_progress = 0, write_speed = 0;
     int min_sparse = 8; /* Need at least 4k of zeros for sparse detection */
     bool quiet = false;
     Error *local_err = NULL;
@@ -1223,7 +1223,7 @@ static int img_convert(int argc, char **argv)
     out_filename = argv[argc - 1];

     /* Initialize before goto out */
-    qemu_progress_init(progress, 2.0);
+    qemu_progress_init(progress, 2.0, write_speed);

     if (options && is_help_option(options)) {
         ret = print_block_option_help(out_filename, out_fmt);
@@ -1237,7 +1237,7 @@ static int img_convert(int argc, char **argv)
         goto out;
     }

-    qemu_progress_print(0, 100);
+    qemu_progress_print(0, 100, write_speed);

     bs = g_malloc0(bs_n * sizeof(BlockDriverState *));

@@ -1460,7 +1460,7 @@ static int img_convert(int argc, char **argv)
             }
         }
         sector_num += n;
-        qemu_progress_print(local_progress, 100);
+        qemu_progress_print(local_progress, 100, write_speed);
     }
     /* signal EOF to align */
     bdrv_write_compressed(out_bs, 0, NULL, 0);
@@ -1475,6 +1475,7 @@ static int img_convert(int argc, char **argv)
     }

     for(;;) {
+        time = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
         nb_sectors = total_sectors - sector_num;
         if (nb_sectors <= 0) {
             break;
         }
@@ -1547,7 +1548,9 @@ static int img_convert(int argc, char **argv)
             n -= n1;
             buf1 += n1 * 512;
         }
-        qemu_progress_print(local_progress, 100);
+        time = qemu_clock_get_ns(QEMU_CLOCK_REALTIME) - time;
+        write_speed = (sectors_to_bytes(n1) * 10 / (double) time) / 1048576;
+        qemu_progress_print(local_progress, 100, write_speed);
     }
 }
 out:
@@ -2174,8 +2177,8 @@ static int img_rebase(int
Re: [Qemu-devel] First Patch, Requesting Comments
> To discuss the code you really should post the patch; I couldn't find any related code at your link.

Hi! I had posted the patch to the list. I have also put it at [1] now.

Thanks.
Varad

[1] https://bugs.launchpad.net/qemu/+bug/603872
Re: [Qemu-devel] First Patch, Requesting Comments
On Mon, Nov 18, 2013 at 10:31 PM, Eric Blake <ebl...@redhat.com> wrote:
> Often-times, this happens when a new contributor fails to realize that the list is moderated, but that we will moderate non-subscribers' messages through. What commonly happens is that a new contributor sends a message, sees nothing in the archives, so they then subscribe and send again under the assumption that it will help; sometimes they even send a third time when getting impatient that it hasn't shown up in the list archives. Then, a few hours later, when the moderator finally releases the gates on the first-time post (a delay which happens whether you posted as a subscriber or non-subscriber), all of the multiple pending messages hit the list, making the new poster sound redundant.

Thanks. I was wondering what had happened to the mails when suddenly I saw a lot of them in my inbox!

> Varad, as a new contributor, you are in a position to possibly help us: what documentation pages did you read to learn where to post your patches, so that we can try to modify those pages to give more hints to help the next guy avoid the embarrassment of double posting?
>> But welcome to the community, and you'll surely find out how to optimize your workflow for the second patch. :)
> Indeed. And if you haven't already found it, http://wiki.qemu.org/Contribute/SubmitAPatch is a great resource (and again, knowing what pages you HAVE found may help us figure out if we can tweak those pages to more prominently point to this page).

Sure. I went through the http://wiki.qemu.org/Documentation/GettingStartedDevelopers and http://wiki.qemu.org/Contribute/SubmitAPatch wikis before sending out the patch. I guess it'd be useful to change these to avoid a repeat of this. Thanks for the reply.

Varad