On Mon, Mar 29, 2010 at 4:41 PM, Badari Pulavarty pbad...@us.ibm.com wrote:
+static void handle_io_work(struct work_struct *work)
+{
+ struct vhost_blk_io *vbio;
+ struct vhost_virtqueue *vq;
+ struct vhost_blk *blk;
+ int i, ret = 0;
+ loff_t pos;
+
On Fri, Mar 26, 2010 at 6:53 PM, Eran Rom er...@il.ibm.com wrote:
Christoph Hellwig hch at infradead.org writes:
Ok. cache=writeback performance is something I haven't bothered looking
at at all. For cache=none any streaming write or random workload with
large enough record sizes got
On Thu, Apr 8, 2010 at 5:02 PM, Mohammed Gamal m.gamal...@gmail.com wrote:
On Thu, Apr 8, 2010 at 6:01 PM, Mohammed Gamal m.gamal...@gmail.com wrote:
1- What does the community prefer to use and improve? CIFS, 9p, or
both? And which is better taken up for GSoC.
There have been recent patches
Does len need to be int? Perhaps it should be unsigned int?
Stefan
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
On Wed, Apr 21, 2010 at 6:57 AM, Yoshiaki Tamura
tamura.yoshi...@lab.ntt.co.jp wrote:
@@ -454,6 +458,25 @@ void qemu_fflush(QEMUFile *f)
}
}
+void *qemu_realloc_buffer(QEMUFile *f, int size)
+{
+ f->buf_max_size = size;
+
+ f->buf = qemu_realloc(f->buf, f->buf_max_size);
+ if
A new iovec array is allocated when creating a merged write request.
This patch ensures that the iovec array is deleted in addition to its
qiov owner.
Signed-off-by: Stefan Hajnoczi stefa...@linux.vnet.ibm.com
---
block.c |3 +++
1 files changed, 3 insertions(+), 0 deletions(-)
diff --git
Leszek,
Please try the qemu-kvm.git patch I have sent, "block: Free
iovec arrays allocated by multiwrite_merge()", to confirm that it fixes
the leak.
Thanks,
Stefan
A new iovec array is allocated when creating a merged write request.
This patch ensures that the iovec array is deleted in addition to its
qiov owner.
Reported-by: Leszek Urbanski tyg...@moo.pl
Signed-off-by: Stefan Hajnoczi stefa...@linux.vnet.ibm.com
---
This fixes the virtio-blk memory leak
On Fri, Apr 23, 2010 at 1:45 AM, Stuart Sheldon s...@actusa.net wrote:
Just upgraded to 12.3 user space tools from 11.0, and now when I attempt
to netboot a guest, it appears that the pxe rom is timing out on dhcp
before the bridge has enough time to come up.
Is there a command line switch to
For reference, my libvirt managed virbr0 has forwarding delay 0. This
is the default:
http://libvirt.org/formatnetwork.html#elementsConnect
I know that my VM is a leaf node (it only has one NIC and isn't going
to create a loop in the network) and therefore it makes sense to
eliminate the
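For reference, the forwarding delay is set in the libvirt network XML; a sketch of a leaf-node configuration (attribute names per the formatnetwork page linked above):

```xml
<network>
  <name>default</name>
  <!-- delay="0" disables the bridge forwarding delay; safe when the
       bridge only attaches leaf-node VMs that cannot create a loop -->
  <bridge name="virbr0" stp="on" delay="0"/>
</network>
```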
From: Stefan Hajnoczi stefa...@gmail.com
The MALLOC_TRACE output didn't look useful when I tried it either.
Instead I used the following to find origin of the leak. Still very basic but
works better with qemu_malloc() and friends.
This is just a hack but I wanted to share it in case someone
I profiled all executions of
qemu_mutex_lock_iothread(), and found that
it only protects the vl.c:main_loop_wait() thread but does NOT protect
the qemu-kvm.c:kvm_cpu_exec() thread. Did I miss something or is this
a defect?
Hi again, I took another look at qemu-kvm 0.12.3 and here is how I
On Sun, May 9, 2010 at 4:23 PM, Gleb Natapov g...@redhat.com wrote:
Neat! I believe SeaBIOS will see virtio-blk devices as harddisks and
not attempt to boot ISOs? Many existing OS installers probably cannot
boot from virtio-blk, but in the longer term folks might like to get
rid of ATAPI CD-ROMs
diff --git a/src/virtio-blk.c b/src/virtio-blk.c
new file mode 100644
index 000..a41c336
--- /dev/null
+++ b/src/virtio-blk.c
@@ -0,0 +1,155 @@
+// Virtio blovl boot support.
Just noticed the blovl typo.
+ char *desc = malloc_tmphigh(MAXDESCSIZE);
+ struct
Looks good.
Stefan
From what I can tell SeaBIOS is reading CMOS_BIOS_BOOTFLAG1 and
CMOS_BIOS_BOOTFLAG2 from non-volatile memory. The values index into
bev[], which contains IPL entries (the drives).
Is the order of bev[] entries well-defined? Is there a way for QEMU
command-line to know that the first virtio-blk
How to count and trace KVM perf events:
http://www.linux-kvm.org/page/Perf_events
I want to draw attention to this because traditional kvm_stat and
kvm_trace use has been moving over to the debugfs based tracing
mechanisms. Perhaps we can flesh out documentation and examples of
common perf
between guest and host virtio-blk emulation.
The blk-iopoll infrastructure is enabled system-wide by default:
kernel.blk_iopoll = 1
It can be disabled to always use interrupt-driven mode (useful for comparison):
kernel.blk_iopoll = 0
Signed-off-by: Stefan Hajnoczi stefa...@linux.vnet.ibm.com
On Fri, May 14, 2010 at 05:30:56PM -0500, Brian Jackson wrote:
Any preliminary numbers? latency, throughput, cpu use? What about comparing
different weights?
I am running benchmarks and will report results when they are in.
Stefan
On Tue, May 18, 2010 at 6:18 PM, Avi Kivity a...@redhat.com wrote:
The block multiwrite code pretends to be able to merge overlapping requests,
but doesn't do so in fact. This leads to I/O errors (for example on mkfs
of a large virtio disk).
Are overlapping write requests correct guest
I just caught up on mails and saw you had already mentioned that
overlapping writes from the guest look fishy in the 1Tb block
issue. Cache mode might still be interesting because it affects how
guest virtio-blk chooses queue ordering mode.
Stefan
On Wed, May 19, 2010 at 9:09 AM, Avi Kivity a...@redhat.com wrote:
On 05/18/2010 10:22 PM, Stefan Hajnoczi wrote:
What cache= mode are you running?
writeback.
In the cache=writeback case the virtio-blk guest driver does:
blk_queue_ordered(q, QUEUE_ORDERED_DRAIN_FLUSH, ...)
Stefan
On Wed, May 19, 2010 at 10:06 AM, Avi Kivity a...@redhat.com wrote:
In the cache=writeback case the virtio-blk guest driver does:
blk_queue_ordered(q, QUEUE_ORDERED_DRAIN_FLUSH, ...)
I don't follow. What's the implication?
I was wondering whether the queue is incorrectly set to a mode
8330 kvm:kvm_entry# 0.000 M/sec
^--- count since starting perf
The 8330 number means that kvm_entry has fired 8330 times since perf
was started. Like Avi says, you need to keep the perf process
running. I run benchmarks using a script that kills perf after the
benchmark
On Thu, May 20, 2010 at 12:16 PM, Jes Sorensen jes.soren...@redhat.com wrote:
On 05/20/10 13:10, Avi Kivity wrote:
What's wrong with starting perf after the warm-up period and stopping it
before it's done?
It's pretty hard to script.
I use the following. It ain't pretty:
#!/bin/bash
On Thu, May 20, 2010 at 1:14 PM, Avi Kivity a...@redhat.com wrote:
echo 1 > /sys/kernel/debug/tracing/events/kvm/enable
cat /sys/kernel/debug/tracing/trace_pipe > results/trace
perf will enable the events by itself (no?), so all you need is the perf
call in the middle.
Yes, it will enable
On Thu, May 20, 2010 at 11:16 PM, Christian Brunner c...@muc.de wrote:
2010/5/20 Anthony Liguori anth...@codemonkey.ws:
Both sheepdog and ceph ultimately transmit I/O over a socket to a central
daemon, right? So could we not standardize a protocol for this that both
sheepdog and ceph could
Trace events in QEMU/KVM can be very useful for debugging and performance
analysis. I'd like to discuss tracing support and hope others have an interest
in this feature, too.
Following this email are patches I am using to debug virtio-blk and storage.
The patches provide trivial tracing support,
Trace events should be defined in trace.h. Events are written to
/tmp/trace.log and can be formatted using trace.py. Remember to add
events to trace.py for pretty-printing.
Signed-off-by: Stefan Hajnoczi stefa...@linux.vnet.ibm.com
---
Makefile.objs |2 +-
trace.c | 64
Signed-off-by: Stefan Hajnoczi stefa...@linux.vnet.ibm.com
---
block.c|7 +++
hw/virtio-blk.c|6 ++
posix-aio-compat.c |2 ++
trace.h| 42 +-
trace.py |8
5 files changed, 64
I should have used the [RFC] tag to make it clear that I'm not
proposing these patches for merge, sorry.
Stefan
On Fri, May 21, 2010 at 12:13 PM, Jan Kiszka jan.kis...@siemens.com wrote:
Stefan Hajnoczi wrote:
Trace events should be defined in trace.h. Events are written to
/tmp/trace.log and can be formatted using trace.py. Remember to add
events to trace.py for pretty-printing.
When already
On Fri, May 21, 2010 at 5:52 PM, Jan Kiszka jan.kis...@siemens.com wrote:
I would just like to avoid that too much efforts are spent on
re-inventing smart trace buffers, trace daemons, or trace visualization
tools. Then better pick up some semi-perfect approach (e.g. [1], it
unfortunately
--stop-trace $(pgrep qemu)
$ ustctl --destroy-trace $(pgrep qemu)
Trace results can be viewed using lttv-gui.
More information about UST:
http://lttng.org/ust
Signed-off-by: Stefan Hajnoczi stefa...@linux.vnet.ibm.com
---
I wrote this as part of trying out UST. Although UST is promising
It is often useful to instrument memory management functions in order to
find leaks or performance problems. This patch adds trace events for
the memory allocation primitives.
Signed-off-by: Stefan Hajnoczi stefa...@linux.vnet.ibm.com
---
An example of adding trace events.
osdep.c |9
This patch adds trace events that make it possible to observe
virtio-blk.
Signed-off-by: Stefan Hajnoczi stefa...@linux.vnet.ibm.com
---
block.c|7 +++
hw/virtio-blk.c|7 +++
posix-aio-compat.c |2 ++
trace-events | 14 ++
4 files changed
The following patches against qemu.git allow static trace events to be declared
in QEMU. Trace events use a lightweight syntax and are independent of the
backend tracing system (e.g. LTTng UST).
Supported backends are:
* my trivial tracer (simple)
* LTTng Userspace Tracer (ust)
* no tracer
the trace:
./tracetool --simple --py trace-events events.py # first time only
./simpletrace.py /tmp/trace.log
Signed-off-by: Stefan Hajnoczi stefa...@linux.vnet.ibm.com
---
This is the same trivial tracer that I posted previously.
.gitignore |2 +
Makefile.objs |3 +
configure
platforms and to
anticipate new backend tracing systems that are currently maturing,
it is important to be flexible and not tied to one system.
Signed-off-by: Stefan Hajnoczi stefa...@linux.vnet.ibm.com
---
.gitignore |2 +
Makefile| 17 +--
Makefile.objs |5
On Sun, May 23, 2010 at 5:18 PM, Antoine Martin anto...@nagafix.co.uk wrote:
Why does it work in a chroot for the other options (aio=native, if=ide, etc)
but not for aio!=native??
Looks like I am misunderstanding the semantics of chroot...
It might not be the chroot() semantics but the
On Sun, May 23, 2010 at 1:01 PM, Avi Kivity a...@redhat.com wrote:
On 05/21/2010 12:29 AM, Anthony Liguori wrote:
I'd be more interested in enabling people to build these types of storage
systems without touching qemu.
Both sheepdog and ceph ultimately transmit I/O over a socket to a central
On Mon, May 24, 2010 at 11:20 PM, Anthony Liguori
aligu...@linux.vnet.ibm.com wrote:
+# check if trace backend exists
+
+sh tracetool --$trace_backend --check-backend > /dev/null 2> /dev/null
This will fail if objdir != srcdir. You have to qualify tracetool with the
path to srcdir.
Thanks
After the RFC discussion, updated patches which I propose for review and merge:
The following patches against qemu.git allow static trace events to be declared
in QEMU. Trace events use a lightweight syntax and are independent of the
backend tracing system (e.g. LTTng UST).
Supported backends
(void *mcb, int ret) mcb %p ret %d
This builds without the multiwrite_cb trace event.
Signed-off-by: Stefan Hajnoczi stefa...@linux.vnet.ibm.com
---
v2:
* This patch is new in v2
trace-events |4 +++-
tracetool| 10 --
2 files changed, 11 insertions(+), 3 deletions(-)
diff
platforms and to
anticipate new backend tracing systems that are currently maturing,
it is important to be flexible and not tied to one system.
Signed-off-by: Stefan Hajnoczi stefa...@linux.vnet.ibm.com
---
v2:
* Use $source_path/tracetool in ./configure
* Include qemu-common.h in trace.h so
--stop-trace $(pgrep qemu)
$ ustctl --destroy-trace $(pgrep qemu)
Trace results can be viewed using lttv-gui.
More information about UST:
http://lttng.org/ust
Signed-off-by: Stefan Hajnoczi stefa...@linux.vnet.ibm.com
---
configure |5 +++-
tracetool | 77
This patch adds trace events for virtqueue operations including
adding/removing buffers, notifying the guest, and receiving a notify
from the guest.
Signed-off-by: Stefan Hajnoczi stefa...@linux.vnet.ibm.com
---
v2:
* This patch is new in v2
hw/virtio.c |8
trace-events |8
This patch adds trace events that make it possible to observe
virtio-blk.
Signed-off-by: Stefan Hajnoczi stefa...@linux.vnet.ibm.com
---
block.c|7 +++
hw/virtio-blk.c|7 +++
posix-aio-compat.c |2 ++
trace-events | 14 ++
4 files changed
It is often useful to instrument memory management functions in order to
find leaks or performance problems. This patch adds trace events for
the memory allocation primitives.
Signed-off-by: Stefan Hajnoczi stefa...@linux.vnet.ibm.com
---
v2:
* Record pointer result from allocation functions
the trace:
./simpletrace.py trace-events /tmp/trace.log
Signed-off-by: Stefan Hajnoczi stefa...@linux.vnet.ibm.com
---
I intend for this tracing backend to be replaced by something based on Prerna's
work. For now it is useful for basic tracing.
v2:
* Make simpletrace.py parse trace-events
On Tue, May 25, 2010 at 1:04 PM, Avi Kivity a...@redhat.com wrote:
Those %ps are more or less useless. We need better ways of identifying
them.
You're right, the vq pointer is useless in isolation. We don't know
which virtio device or which virtqueue number.
With the full context of a trace
On Tue, May 25, 2010 at 2:52 PM, Avi Kivity a...@redhat.com wrote:
Hm. Perhaps we can convert %{type} to %p for backends which don't support
it, and to whatever format they do support for those that do.
True.
Stefan
The perf trace command produces the following messages:
For kvm:kvm_apic:
$ perf trace
Warning: Error: expected type 4 but read 7
Warning: Error: expected type 5 but read 0
Warning: failed to read event print fmt for kvm_apic
For kvm:kvm_inj_exception:
$ perf trace
Warning: Error:
I get parse errors when using Steven Rostedt's trace-cmd tool, too.
Any ideas what is going on here? I can provide more info (e.g. trace
files) if necessary.
Stefan
On Fri, May 28, 2010 at 05:45:57PM -0400, Steven Rostedt wrote:
On Fri, 2010-05-28 at 17:42 +0100, Stefan Hajnoczi wrote:
I get parse errors when using Steven Rostedt's trace-cmd tool, too.
Any ideas what is going on here? I can provide more info (e.g. trace
files) if necessary.
Does
On Sat, May 29, 2010 at 10:42 AM, Antoine Martin anto...@nagafix.co.uk wrote:
Can someone explain the aio options?
All I can find is this:
# qemu-system-x86_64 -h | grep -i aio
[,addr=A][,id=name][,aio=threads|native]
I assume it means the aio=threads emulates the kernel's aio with
On Sat, May 29, 2010 at 11:34 AM, Christoph Hellwig h...@infradead.org wrote:
In what benchmark do you see worse results for aio=native compared to
aio=threads?
Sequential reads using 4 concurrent dd if=/dev/vdb iflag=direct
of=/dev/null bs=8k processes. 2 vcpu guest with 4 GB RAM, virtio
On Sun, Jan 23, 2011 at 9:35 PM, Emil Langrock emil.langr...@gmx.de wrote:
there is support for ext4 to use the trim ATA command when a block is freed. I
read that there should be an extra command which does that freeing afterwards.
So is it possible to use that information inside the qcow to
On Tue, Jan 25, 2011 at 2:02 PM, Luiz Capitulino lcapitul...@redhat.com wrote:
- Google summer of code 2011 is on, are we interested? (note: I just saw the
news, I don't have any information yet)
http://www.google-melange.com/document/show/gsoc_program/google/gsoc2011/timeline
I'd like to
On Tue, Jan 25, 2011 at 2:26 PM, Avi Kivity a...@redhat.com wrote:
On 01/25/2011 12:06 AM, Anthony Liguori wrote:
On 01/24/2011 07:25 AM, Chris Wright wrote:
Please send in any agenda items you are interested in covering.
- coroutines for the block layer
I have a perpetually in progress
On Fri, Jan 28, 2011 at 1:13 PM, Himanshu Chauhan
hschau...@nulltrace.org wrote:
I just cloned qemu-kvm, built and installed it. But the qemu-img fails
to create any disk image above 1G. The problem as I see it is the use of
ssize_t for image size. When size is 2G, the check if (sval < 0)
succeeds and
2011/1/29 Darko Petrović darko.b.petro...@gmail.com:
Could you please tell me if it is possible to use a block driver that
completely avoids the guest kernel and copies block data directly to/from
the given buffer in the guest userspace?
If yes, how to activate it? If not... why not? :)
2011/1/29 Darko Petrović darko.b.petro...@gmail.com:
Thanks for your help. Actually, I am more interested in doing it from the
outside, if possible (I am not allowed to change the application code). Can
the guest be tricked by KVM somehow, using the appropriate drivers? Just to
clear it out,
On Mon, Jan 31, 2011 at 11:27 AM, Jan Kiszka jan.kis...@siemens.com wrote:
On 2011-01-31 11:03, Avi Kivity wrote:
On 01/27/2011 04:33 PM, Jan Kiszka wrote:
Found by Stefan Hajnoczi: There is a race in kvm_cpu_exec between
checking for exit_request on vcpu entry and timer signals arriving
On Mon, Jan 31, 2011 at 12:18 PM, Jan Kiszka jan.kis...@siemens.com wrote:
On 2011-01-31 13:13, Stefan Hajnoczi wrote:
On Mon, Jan 31, 2011 at 11:27 AM, Jan Kiszka jan.kis...@siemens.com wrote:
On 2011-01-31 11:03, Avi Kivity wrote:
On 01/27/2011 04:33 PM, Jan Kiszka wrote:
Found by Stefan
On Mon, Feb 7, 2011 at 10:40 PM, Chris Wright chr...@redhat.com wrote:
Please send in any agenda items you are interested in covering.
Automated builds and testing: maintainer trees, integrating
KVM-Autotest, and QEMU tests we need but don't exist
Stefan
On Tue, Feb 8, 2011 at 3:55 PM, Chris Wright chr...@redhat.com wrote:
Automated builds and testing
- found broken 32-bit
The broken build was found (and fixed?) before automated qemu.git
builds. It's a good motivator though.
Stefan
On Wed, Feb 9, 2011 at 1:55 AM, Michael S. Tsirkin m...@redhat.com wrote:
On Wed, Feb 09, 2011 at 12:09:35PM +1030, Rusty Russell wrote:
On Wed, 9 Feb 2011 11:23:45 am Michael S. Tsirkin wrote:
On Wed, Feb 09, 2011 at 11:07:20AM +1030, Rusty Russell wrote:
On Wed, 2 Feb 2011 03:12:22 pm
On Mon, Feb 14, 2011 at 6:15 PM, Thomas Broda tho...@bassfimass.de wrote:
dd'ing /dev/zero to a testfile gives me a throughput of about 400MB/s when
done directly on the hypervisor. If I try this from within a virtual guest,
it's only 19MB/s to 24MB/s if the guest is on the LVM volume (raw
On Tue, Feb 15, 2011 at 10:15 AM, Thomas Broda tho...@bassfimass.de wrote:
On Tue, 15 Feb 2011 09:19:23 +, Stefan Hajnoczi
stefa...@gmail.com wrote:
Did you run dd with O_DIRECT?
dd if=/dev/zero of=path-to-device oflag=direct bs=64k
Using O_DIRECT, performance went down to 11 MB/s
On Mon, Feb 14, 2011 at 10:18 PM, Anthony Liguori anth...@codemonkey.ws wrote:
On 02/14/2011 11:56 AM, Chris Wright wrote:
Please send in any agenda items you are interested in covering.
-rc2 is tagged and waiting for announcement. Please take a look at -rc2 and
make sure there is nothing
On Wed, Feb 16, 2011 at 12:50 PM, Thomas Broda tho...@bassfimass.de wrote:
On Tue, 15 Feb 2011 15:50:00 +, Stefan Hajnoczi
stefa...@gmail.com wrote:
On Tue, Feb 15, 2011 at 10:15 AM, Thomas Broda tho...@bassfimass.de
wrote:
Using O_DIRECT, performance went down to 11 MB/s
On Thu, Feb 17, 2011 at 10:44 AM, Philipp Hahn h...@univention.de wrote:
Hello,
I tried to install Windows 7 Professional 64 Bit with VirtIO 1.16 on an Debian
based system using AMD64 CPUs. During the install, the system froze (progress
bar didn't advance) and kvm was slowly eating CPU cycles
On Thu, Feb 17, 2011 at 12:45 PM, Vadim Rozenfeld vroze...@redhat.com wrote:
On Thu, 2011-02-17 at 13:41 +0200, Gleb Natapov wrote:
On Thu, Feb 17, 2011 at 11:30:25AM +, Stefan Hajnoczi wrote:
On Thu, Feb 17, 2011 at 10:44 AM, Philipp Hahn h...@univention.de wrote:
Hello,
I tried
On Mon, Feb 28, 2011 at 9:10 AM, Paolo Bonzini pbonz...@redhat.com wrote:
+static unsigned __stdcall win32_start_routine(void *arg)
+{
+ struct QemuThreadData data = *(struct QemuThreadData *) arg;
+ QemuThread *thread = data.thread;
+
+ free(arg);
qemu_free(arg);
Stefan
On Tue, Mar 1, 2011 at 5:01 AM, ya su suya94...@gmail.com wrote:
kvm starts with a disk image on an NFS server; when the NFS server cannot be
reached, the monitor is blocked. I changed io_thread to the SCHED_RR
policy, but it still stalls waiting for disk read/write timeouts.
There are some
On Tue, Mar 1, 2011 at 10:23 AM, Kevin Clark kevin.cl...@csoft.co.uk wrote:
Any thoughts/ideas?
There are a lot of variables here. Are you using virtio-blk devices
and Windows guest drivers? Are you using hardware RAID5 on the NFS
server? Could it be a network issue (contention during
On Tue, Mar 1, 2011 at 12:39 PM, ya su suya94...@gmail.com wrote:
how about moving kvm_handle_io/handle_mmio from the kvm_run function
into kvm_main_loop? These belong to I/O handling, so this
would remove the qemu_mutex contention between the 2 threads. Is this a
reasonable thought?
In
On Wed, Mar 2, 2011 at 10:39 AM, ya su suya94...@gmail.com wrote:
io_thread bt as the following:
#0 0x7f3086eaa034 in __lll_lock_wait () from /lib64/libpthread.so.0
#1 0x7f3086ea5345 in _L_lock_870 () from /lib64/libpthread.so.0
#2 0x7f3086ea5217 in pthread_mutex_lock () from
On Wed, Mar 2, 2011 at 10:30 AM, Kevin Clark kevin.cl...@csoft.co.uk wrote:
The results are much better, with 64MB writes on the system drive coming in
at 39MB/s and reads 310MB/s. The second drive gives me 94MB/s for writes and
777MB/s for reads for a 64MB file. Again, that's wildly
On Sun, Mar 6, 2011 at 10:25 PM, Mathias Klette mkle...@gmail.com wrote:
I've tested with iozone to compare IO with a linux guest and also to
verify changes made to improve situation - but nothing really helped.
TESTS with iozone -s 4G -r 256k -c -e:
Please use the -I option to bypass the
On Tue, Mar 8, 2011 at 4:00 PM, Anthony Liguori anth...@codemonkey.ws wrote:
http://wiki.qemu.org/Features/QAPI/VirtAgent
That page does not exist. I think you meant this one:
http://wiki.qemu.org/Features/QAPI/GuestAgent
Stefan
On Wed, Mar 9, 2011 at 10:57 AM, Corentin Chary
corentin.ch...@gmail.com wrote:
The threaded VNC servers messed up with QEMU fd handlers without
any kind of locking, and that can cause some nasty race conditions.
The IO-Thread provides appropriate locking primitives to avoid that.
This patch
On Wed, Mar 9, 2011 at 10:01 AM, Avi Kivity a...@redhat.com wrote:
On 03/09/2011 11:42 AM, Harald Dunkel wrote:
Hi folks,
would it make sense to make elevator=noop the default
for virtio block devices? Or would you recommend to
set this on the kvm server instead?
I think leaving the
On Fri, Mar 11, 2011 at 5:55 AM, Alexander Graf ag...@suse.de wrote:
On 17.02.2011, at 22:01, Jan Kiszka wrote:
On 2011-02-07 12:19, Jan Kiszka wrote:
We do not check them, and the only arch with non-empty implementations
always returns 0 (this is also true for qemu-kvm).
Signed-off-by:
On Mon, Mar 14, 2011 at 6:05 PM, Guido Winkelmann
guido-k...@thisisnotatest.de wrote:
Does anybody have an idea what might cause this or what might be done about
it?
The lsi_scsi emulation code is incomplete. It does not handle some
situations like the ORDERED commands or message 0x0c.
There
On Mon, Mar 14, 2011 at 10:57 PM, Guido Winkelmann
guido-k...@thisisnotatest.de wrote:
On Monday 14 March 2011 20:32:23 Stefan Hajnoczi wrote:
On Mon, Mar 14, 2011 at 6:05 PM, Guido Winkelmann
guido-k...@thisisnotatest.de wrote:
Does anybody have an idea what might cause this or what might
On Tue, Mar 15, 2011 at 7:47 AM, Alexander Graf ag...@suse.de wrote:
On 15.03.2011, at 08:09, Stefan Hajnoczi wrote:
On Mon, Mar 14, 2011 at 10:57 PM, Guido Winkelmann
guido-k...@thisisnotatest.de wrote:
On Monday 14 March 2011 20:32:23 Stefan Hajnoczi wrote:
On Mon, Mar 14, 2011 at 6:05 PM
On Tue, Mar 15, 2011 at 9:16 AM, Alexander Graf ag...@suse.de wrote:
On 15.03.2011, at 10:03, Stefan Hajnoczi wrote:
On Tue, Mar 15, 2011 at 7:47 AM, Alexander Graf ag...@suse.de wrote:
On 15.03.2011, at 08:09, Stefan Hajnoczi wrote:
On Mon, Mar 14, 2011 at 10:57 PM, Guido Winkelmann
On Fri, Mar 18, 2011 at 12:02 PM, Ben Nagy b...@iagu.net wrote:
KVM commandline (using libvirt):
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin
QEMU_AUDIO_DRV=none /usr/local/bin/kvm-snapshot -S -M pc-0.14
-enable-kvm -m 1024 -smp 1,sockets=1,cores=1,threads=1 -name
On Fri, Mar 18, 2011 at 4:06 PM, Guido Winkelmann
guido-k...@thisisnotatest.de wrote:
Am Wednesday 16 March 2011 schrieb Stefan Hajnoczi:
On Tue, Mar 15, 2011 at 1:20 PM, Guido Winkelmann
guido-k...@thisisnotatest.de wrote:
Am Tuesday 15 March 2011 schrieben Sie:
On Mon, Mar 14, 2011 at 10
On Thu, Mar 24, 2011 at 1:38 PM, Conor Murphy
conor_murphy_v...@hotmail.com wrote:
#4 _int_free (av=value optimized out, p=0x7fa24c0009f0, have_lock=0) at
malloc.c:4795
#5 0x004a18fe in qemu_vfree (ptr=0x7fa24c000a00) at oslib-posix.c:76
#6 0x0045af3d in handle_aiocb_rw
On Thu, Mar 24, 2011 at 03:51:36PM -0700, Josh Durgin wrote:
You have sent a malformed patch. Please send patches that follow the
guidelines at http://wiki.qemu.org/Contribute/SubmitAPatch and test that
your mail client is not line wrapping or mangling whitespace.
Stefan
On Wed, Mar 30, 2011 at 9:15 AM, Conor Murphy
conor_murphy_v...@hotmail.com wrote:
I'm trying to write a virtio-blk driver for Solaris. I've gotten it to the
point
where Solaris can see the device and create a ZFS file system on it.
However when I try and create a UFS filesystem on the
On Fri, Apr 1, 2011 at 9:33 AM, Alexander Graf ag...@suse.de wrote:
We're constantly developing and improving KVM, implementing new awesome
features or simply fixing bugs in the existing code.
But do people actually use that new code? Are we maybe writing it all in
vain? Wouldn't it be nice
On Sat, Apr 2, 2011 at 4:23 PM, Nikola Ciprich extmaill...@linuxbox.cz wrote:
I'm using virtio network channel, and on one of the guests (the one with
aborted ext4) I use it also for one of virtual disks.
One more interesting thing, I can't reproduce this immediately after guest
boot, but
On Tue, Apr 5, 2011 at 4:07 PM, Chris Wright chr...@redhat.com wrote:
kvm-autotest
- roadmap...refactor to centralize testing (handle the xen-autotest split off)
- internally at RH, lmr and cleber maintain autotest server to test
branches (testing qemu.git daily)
- have good automation for
On Thu, Apr 07, 2011 at 10:14:03AM +0900, Yoshiaki Tamura wrote:
2011/3/29 Josh Durgin josh.dur...@dreamhost.com:
The new format is
rbd:pool/image[@snapshot][:option1=value1[:option2=value2...]]
Each option is used to configure rados, and may be any Ceph option, or
conf.
The conf
On Tue, Apr 5, 2011 at 6:37 PM, Lucas Meneghel Rodrigues l...@redhat.com
wrote:
Thanks for your detailed response!
On Tue, 2011-04-05 at 16:29 +0100, Stefan Hajnoczi wrote:
* Public notifications of breakage, qemu.git/master failures to
qemu-devel mailing list.
^ The challenge is to get
On Mon, Mar 28, 2011 at 04:15:57PM -0700, Josh Durgin wrote:
librbd stacks on top of librados to provide access
to rbd images.
Using librbd simplifies the qemu code, and allows
qemu to use new versions of the rbd format
with few (if any) changes.
Signed-off-by: Josh Durgin