08.04.2010 09:07, Thomas Mueller wrote:
[]
This helped a lot:
I enabled the deadline block scheduler instead of the default cfq on the
host system. Tested with: Host Debian with scheduler deadline, Guest
Win2008 with Virtio and cache=none. (Measured boost: 26MB/s to 50MB/s.)
Maybe this is also true
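(For reference, a minimal sketch in C of the scheduler switch described above, assuming the host disk is sda; it is just the programmatic form of echo deadline > /sys/block/sda/queue/scheduler:)

/* Select the deadline elevator for one block device by writing to
 * its sysfs scheduler file. Run as root; "sda" is an assumption. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/sys/block/sda/queue/scheduler", "w");

    if (!f) {
        perror("fopen");
        return 1;
    }
    fputs("deadline\n", f);
    fclose(f);
    return 0;
}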
On Thu, 08 Apr 2010 10:05:09 +0400, Michael Tokarev wrote:
08.04.2010 09:07, Thomas Mueller wrote: []
This helped a lot:
I enabled the deadline block scheduler instead of the default cfq on
the host system. Tested with: Host Debian with scheduler deadline,
Guest Win2008 with Virtio and
Gleb Natapov wrote:
On Thu, Apr 08, 2010 at 02:27:53PM +0900, Yoshiaki Tamura wrote:
Avi Kivity wrote:
On 04/07/2010 08:21 PM, Yoshiaki Tamura wrote:
The problem here is that I needed to transfer the VM state as it was
just *before* the output to the devices. Otherwise, the VM state has
On Thu, 08 Apr 2010 06:09:05 +0000, Thomas Mueller wrote:
On Thu, 08 Apr 2010 10:05:09 +0400, Michael Tokarev wrote:
08.04.2010 09:07, Thomas Mueller wrote: []
This helped a lot:
I enabled the deadline block scheduler instead of the default cfq on
the host system. Tested with: Host Debian
On Thu, Apr 08, 2010 at 02:27:53PM +0900, Yoshiaki Tamura wrote:
Currently we complete instructions for output operations and leave them
incomplete for input operations. Deferring completion for output
operations should work, except it may break the vmware backdoor port
(see hw/vmport.c),
On 04/08/2010 08:27 AM, Yoshiaki Tamura wrote:
The requirement is that the guest must always be able to replay at
least the instruction which triggered the synchronization on the primary.
You have two choices:
- complete execution of the instruction in both the kernel and the
device
On Thu, Apr 08, 2010 at 10:17:01AM +0300, Avi Kivity wrote:
On 04/08/2010 08:27 AM, Yoshiaki Tamura wrote:
The requirement is that the guest must always be able to replay at
least the instruction which triggered the synchronization on the
primary.
You have two choices:
- complete
Avi Kivity wrote:
On 04/07/2010 11:24 PM, Marcelo Tosatti wrote:
During initialization, WinXP.32 switches to virtual-8086 mode, with
paging enabled, to use VGABIOS functions.
Since enter_pmode unconditionally clears IOPL and VM bits in RFLAGS
flags = vmcs_readl(GUEST_RFLAGS);
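(For context, the surrounding enter_pmode code in arch/x86/kvm/vmx.c of this era, quoted approximately: IOPL is restored from a saved copy, but VM is cleared with nothing saved to restore, which is the bug under discussion.)

flags = vmcs_readl(GUEST_RFLAGS);
flags &= ~(X86_EFLAGS_IOPL | X86_EFLAGS_VM);  /* VM bit is lost here */
flags |= vmx->rmode.save_iopl << IOPL_SHIFT;  /* only IOPL comes back */
vmcs_writel(GUEST_RFLAGS, flags);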
On 04/08/2010 02:13 AM, Richard Simpson wrote:
gordon Code # ./check-nx
nx: enabled
gordon Code #
OK, seems to be enabled just fine. Any other ideas? I am beginning to
get that horrible feeling that there isn't a real problem and it is just
me being dumb!
I really hope so,
On 04/08/2010 10:22 AM, Jan Kiszka wrote:
Avi Kivity wrote:
On 04/07/2010 11:24 PM, Marcelo Tosatti wrote:
During initialization, WinXP.32 switches to virtual-8086 mode, with
paging enabled, to use VGABIOS functions.
Since enter_pmode unconditionally clears IOPL and VM bits in
Gleb Natapov wrote:
On Thu, Apr 08, 2010 at 02:27:53PM +0900, Yoshiaki Tamura wrote:
Currently we complete instructions for output operations and leave them
incomplete for input operations. Deferring completion for output
operations should work, except it may break the vmware backdoor port
(see
On 04/08/2010 10:30 AM, Yoshiaki Tamura wrote:
To answer your question, it should be possible to implement.
The downside is that after going into KVM to make the guest state
consistent, we need to go back to qemu to actually transfer the guest,
and this bounce would introduce another
Avi Kivity wrote:
On 04/08/2010 10:22 AM, Jan Kiszka wrote:
Avi Kivity wrote:
On 04/07/2010 11:24 PM, Marcelo Tosatti wrote:
During initialization, WinXP.32 switches to virtual-8086 mode, with
paging enabled, to use VGABIOS functions.
Since enter_pmode unconditionally clears
On 04/08/2010 10:54 AM, Jan Kiszka wrote:
Looks like KVM_SET_REGS should write rmode.save_iopl (and a new save_vm)?
Just like we manipulate the flags for guest debugging in the
set/get_rflags vendor handlers, the same should happen for IOPL and VM.
This is no business of
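(A sketch of what that fixup could look like in the VMX get_rflags handler; illustrative only, though the fix that eventually went upstream does save the full flags as rmode.save_rflags:)

static unsigned long vmx_get_rflags(struct kvm_vcpu *vcpu)
{
        unsigned long rflags = vmcs_readl(GUEST_RFLAGS);

        if (to_vmx(vcpu)->rmode.vm86_active) {
                /* Report the guest's own IOPL/VM, not the values the
                 * real-mode emulation forced into the VMCS. */
                rflags &= ~(unsigned long)(X86_EFLAGS_IOPL | X86_EFLAGS_VM);
                rflags |= to_vmx(vcpu)->rmode.save_rflags &
                          (X86_EFLAGS_IOPL | X86_EFLAGS_VM);
        }
        return rflags;
}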
Gleb Natapov wrote:
On Thu, Apr 08, 2010 at 10:17:01AM +0300, Avi Kivity wrote:
On 04/08/2010 08:27 AM, Yoshiaki Tamura wrote:
The requirement is that the guest must always be able to replay at
least the instruction which triggered the synchronization on the
primary.
You have two choices:
Avi Kivity wrote:
On 04/08/2010 10:30 AM, Yoshiaki Tamura wrote:
To answer your question, it should be possible to implement.
The downside is that after going into KVM to make the guest state
consistent, we need to go back to qemu to actually transfer the guest,
and this bounce would
On 04/08/2010 11:30 AM, Yoshiaki Tamura wrote:
If I transferred a VM after I/O operations, let's say the VM sent a
TCP ACK to the client, and if a hardware failure occurred to the
primary during the VM transfer *but the client received the TCP
ACK*, the secondary will resume from the
On 04/08/2010 11:10 AM, Yoshiaki Tamura wrote:
If the responses to the mmio or pio request are exactly the same,
then the replay will happen exactly the same.
I agree. What I'm wondering is how we can guarantee that the
responses are the same...
I don't think you can in the general case.
On Wed, Apr 07, 2010 at 02:07:18PM -0700, David Stevens wrote:
kvm-ow...@vger.kernel.org wrote on 04/07/2010 11:09:30 AM:
On Wed, Apr 07, 2010 at 10:37:17AM -0700, David Stevens wrote:
Thanks!
There's some whitespace damage, are you sending with your new
sendmail setup? It
Avi Kivity wrote:
On 04/07/2010 11:38 PM, Richard Simpson wrote:
On 07/04/10 13:23, Avi Kivity wrote:
Run as root, please. And check first that you have a file named
/dev/cpu/0/msr.
Doh!
gordon Code # ./check-nx
nx: enabled
gordon Code #
OK, seems to be enabled just fine. Any other
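(The check-nx source isn't shown in the thread; a tool like it presumably just reads IA32_EFER through the msr driver and tests the NXE bit, roughly as follows. As Avi notes above, it needs root and /dev/cpu/0/msr present.)

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define MSR_EFER 0xc0000080     /* IA32_EFER */
#define EFER_NXE (1ULL << 11)   /* no-execute enable */

int main(void)
{
    uint64_t efer;
    int fd = open("/dev/cpu/0/msr", O_RDONLY);

    /* the msr driver interprets the file offset as the MSR index */
    if (fd < 0 || pread(fd, &efer, sizeof(efer), MSR_EFER) != sizeof(efer)) {
        perror("msr");
        return 1;
    }
    printf("nx: %s\n", (efer & EFER_NXE) ? "enabled" : "disabled");
    return 0;
}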
From: Xin Xiaohui xiaohui@intel.com
---
Michael,
This is a small patch for the write logging issue with the async queue.
I have made a __vhost_get_vq_desc() function which can compute the log
info for any valid buffer index. __vhost_get_vq_desc() is factored out
of the code in vq_get_vq_desc().
And
Avi Kivity wrote:
On 04/08/2010 11:30 AM, Yoshiaki Tamura wrote:
If I transferred a VM after I/O operations, let's say the VM sent a
TCP ACK to the client, and if a hardware failure occurred to the
primary during the VM transfer *but the client received the TCP
ACK*, the secondary will
On Thu, Apr 08, 2010 at 10:05:09AM +0400, Michael Tokarev wrote:
LVM volumes. This is because with cache=none, the virtual disk
image is opened with the O_DIRECT flag, which means all I/O bypasses
the host scheduler and buffer cache.
O_DIRECT does not bypass the I/O scheduler, only the page cache.
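(To make the distinction concrete, a minimal sketch of what cache=none amounts to when qemu opens the image; the filename is illustrative. The O_DIRECT I/O skips the host page cache but is still queued through the elevator, which is why the deadline-vs-cfq choice matters.)

#define _GNU_SOURCE             /* for O_DIRECT */
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    void *buf;
    int fd = open("disk.img", O_RDONLY | O_DIRECT);

    if (fd < 0)
        return 1;
    /* O_DIRECT requires sector-aligned buffers and transfer sizes */
    if (posix_memalign(&buf, 512, 4096))
        return 1;
    if (read(fd, buf, 4096) < 0)   /* no page cache, scheduler still used */
        return 1;
    free(buf);
    close(fd);
    return 0;
}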
On 04/08/2010 12:14 PM, Yoshiaki Tamura wrote:
I don't think you can in the general case. But if you gate output at the
device level, instead of the instruction level, the problem goes
away, no?
Yes, it should.
To implement this, we need No.3 to be called asynchronously.
If qemu is
2010/4/8 Avi Kivity a...@redhat.com:
On 04/08/2010 12:14 PM, Yoshiaki Tamura wrote:
I don't think you can in the general case. But if you gate output at the
device level, instead of the instruction level, the problem goes away,
no?
Yes, it should.
To implement this, we need No.3 to
On 04/08/2010 04:42 PM, Yoshiaki Tamura wrote:
Yes, you can release the I/O from the iothread instead of the vcpu thread.
You can make virtio_net_handle_tx() disable virtio notifications and
initiate state sync and return; when state sync continues you can call the
original
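(A rough sketch of that flow against qemu's virtio-net: virtio_net_handle_tx, virtio_queue_set_notification and virtio_net_flush_tx are real qemu symbols of this period, while the state-sync hook and its callback are made-up names for whatever Kemari would plug in.)

static void tx_sync_done(void *opaque)           /* runs once sync completes */
{
    VirtIONet *n = opaque;

    virtio_net_flush_tx(n, n->tx_vq);            /* the original tx work */
    virtio_queue_set_notification(n->tx_vq, 1);  /* re-enable guest kicks */
}

static void virtio_net_handle_tx(VirtIODevice *vdev, VirtQueue *vq)
{
    VirtIONet *n = to_virtio_net(vdev);

    virtio_queue_set_notification(vq, 0);        /* quiesce further kicks */
    state_sync_begin(tx_sync_done, n);           /* hypothetical hook */
}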
On 04/07/2010 11:49 AM, Feng Yang wrote:
Add the functions run_autotest_background and wait_autotest_background to
kvm_test_utils.py. These two functions are used in the ioquit test script.
Signed-off-by: Feng Yang fy...@redhat.com
---
client/tests/kvm/kvm_test_utils.py | 68
Avi Kivity wrote:
On 03/24/2010 06:40 PM, Joerg Roedel wrote:
Looks trivial to find a guest, less so with enumerating (still doable).
Not so trivial and even more likely to break. Even if perf has the pid of
the process and wants to find the directory, it has to do:
1. Get the uid of
On 04/07/2010 11:49 AM, Feng Yang wrote:
Signed-off-by: Feng Yang fy...@redhat.com
---
client/tests/kvm/tests/ioquit.py | 54
client/tests/kvm/tests_base.cfg.sample | 4 ++
2 files changed, 58 insertions(+), 0 deletions(-)
create mode 100644
On Thu, Apr 08, 2010 at 11:05:56AM +0300, Avi Kivity wrote:
On 04/08/2010 10:54 AM, Jan Kiszka wrote:
Looks like KVM_SET_REGS should write rmode.save_iopl (and a new save_vm)?
Just like we manipulate the flags for guest debugging in the
set/get_rflags vendor handlers, the same should
On 04/08/2010 05:16 PM, Marcelo Tosatti wrote:
On Thu, Apr 08, 2010 at 11:05:56AM +0300, Avi Kivity wrote:
On 04/08/2010 10:54 AM, Jan Kiszka wrote:
Looks like KVM_SET_REGS should write rmode.save_iopl (and a new save_vm)?
Just like we manipulate the flags for
On Thu, Apr 08, 2010 at 09:54:35AM +0200, Jan Kiszka wrote:
The following patch fixes it, but it has some drawbacks:
- cpu_synchronize_state+writeback is noticeably slow with tpr patching;
this makes it slower.
Isn't it a very rare event?
It has to be -
Marcelo Tosatti wrote:
On Thu, Apr 08, 2010 at 11:05:56AM +0300, Avi Kivity wrote:
On 04/08/2010 10:54 AM, Jan Kiszka wrote:
Looks like KVM_SET_REGS should write rmode.save_iopl (and a new save_vm)?
Just like we manipulate the flags for guest debugging in the
set/get_rflags vendor handlers,
Hi,
Now that Cam is almost done with his ivshmem patches, I was thinking
of another idea for GSoC which is improving the pass-through
filesystems.
I've got some questions on that:
1- What does the community prefer to use and improve? CIFS, 9p, or
both? And which is a better fit for GSoC?
2-
On Thu, Apr 8, 2010 at 6:01 PM, Mohammed Gamal m.gamal...@gmail.com wrote:
Hi,
Now that Cam is almost done with his ivshmem patches, I was thinking
of another idea for GSoC which is improving the pass-through
filesystems.
I've got some questions on that:
1- What does the community prefer to
On Fri, Mar 26, 2010 at 6:53 PM, Eran Rom er...@il.ibm.com wrote:
Christoph Hellwig hch at infradead.org writes:
Ok. cache=writeback performance is something I haven't bothered looking
at at all. For cache=none any streaming write or random workload with
large enough record sizes got
On Thu, Apr 8, 2010 at 5:02 PM, Mohammed Gamal m.gamal...@gmail.com wrote:
On Thu, Apr 8, 2010 at 6:01 PM, Mohammed Gamal m.gamal...@gmail.com wrote:
1- What does the community prefer to use and improve? CIFS, 9p, or
both? And which is a better fit for GSoC?
There have been recent patches
Hi!
I am working on a lightweight KVM userspace launcher for Linux and am a
bit stuck with a guest Linux kernel restarting when it tries to enter
long mode.
The register dump looks like this:
penb...@tiger:~/vm$ ./kvm bzImage
KVM exit reason: 8 (KVM_EXIT_SHUTDOWN)
Registers:
rip:
On 04/08/2010 09:26 PM, Pekka Enberg wrote:
Hi!
I am working on a lightweight KVM userspace launcher for Linux and am a
bit stuck with a guest Linux kernel restarting when it tries to enter
long mode.
The register dump looks like this:
penb...@tiger:~/vm$ ./kvm bzImage
KVM exit reason: 8
Avi Kivity wrote:
These all look reasonable. Please add a gdtr dump and an idtr dump.
Done.
2b:*  cb       lret     <-- trapping instruction
Post the two u32s at ss:rsp - ss:rsp+8. That will tell us where the
guest is trying to return. Actually, from the dump:
1a:
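(In a flat-memory launcher that is only a few lines; this assumes guest RAM is mapped contiguously at guest_ram and that ss:rsp still follows real-mode segmentation, both assumptions about Pekka's tool. regs and sregs come from KVM_GET_REGS/KVM_GET_SREGS on the vcpu fd.)

uint32_t *stack = (uint32_t *)(guest_ram +
                  ((uint64_t)sregs.ss.selector << 4) + regs.rsp);

fprintf(stderr, "lret target: eip=%#x cs=%#x\n", stack[0], stack[1]);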
Hi,
running Debian Squeeze with a 2.6.32-3-amd64 kernel and qemu-kvm 0.12.3,
I enabled hugetlbfs on a rather small box with about five similar VMs
today (all Debian Squeeze amd64, but different services).
Pro:
* system load on the host has gone way down (by about 50%)
Contra:
* KSM seems to be
On 04/08/2010 09:59 PM, Pekka Enberg wrote:
2b:*  cb       lret     <-- trapping instruction
Post the two u32s at ss:rsp - ss:rsp+8. That will tell us where the
guest is trying to return. Actually, from the dump:
1a:   6a 10    pushq  $0x10
1c:   8d
I asked this question quite a while ago; it seems huge pages do not get scanned
for merging.
David Martin
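(For background: qemu-kvm registers guest RAM with KSM via madvise, and the KSM scanner of this era only walks regular 4k-page VMAs, so hugetlbfs-backed guest RAM is simply never considered. The registration itself, with an anonymous mapping standing in for guest RAM:)

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 16 << 20;      /* stand-in for guest RAM */
    void *ram = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    /* KSM only merges pages in VMAs marked mergeable (2.6.32+) */
    if (ram == MAP_FAILED || madvise(ram, len, MADV_MERGEABLE))
        perror("ksm");
    return 0;
}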
- Bernhard Schmidt be...@birkenwald.de wrote:
Hi,
running Debian Squeeze with a 2.6.32-3-amd64 kernel and qemu-kvm
0.12.3
I enabled hugetlbfs on a rather small box with about
On Thu, Apr 08, 2010 at 06:19:35PM +0300, Avi Kivity wrote:
Currently we set eflags.vm unconditionally when entering real mode emulation
through virtual-8086 mode, and clear it unconditionally when we enter
protected
mode. This means that the following sequence
KVM_SET_REGS
Is there any way to disable this? I'm running a guest on -net user
networking, with no interaction with the host network, yet during the
test I get tons of:
15:50:48 DEBUG| (address cache) Adding cache entry: 00:1a:64:39:04:91 ---
10.0.253.16
15:50:49 DEBUG| (address cache) Adding cache entry:
On 08/04/10 09:52, Andre Przywara wrote:
Can you try to boot the attached multiboot kernel, which just outputs
a brief CPUID dump?
$ qemu-kvm -kernel cpuid_mb -vnc :0
(Unfortunately I have no serial console support in there yet, so you
either have to write the values down or screenshot it).
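(Per leaf, such a dump boils down to something like the following, which can also be run on the host for comparison; shown here for the NX-relevant leaf 0x80000001 only.)

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx))
        return 1;
    printf("0x80000001: edx=%08x (NX, bit 20: %s)\n",
           edx, (edx & (1u << 20)) ? "set" : "clear");
    return 0;
}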
Antoine Martin wrote:
On 03/08/2010 02:35 AM, Avi Kivity wrote:
On 03/07/2010 09:25 PM, Antoine Martin wrote:
On 03/08/2010 02:17 AM, Avi Kivity wrote:
On 03/07/2010 09:13 PM, Antoine Martin wrote:
What version of glibc do you have installed?
Latest stable:
sys-devel/gcc-4.3.4
On 08/04/10 08:23, Avi Kivity wrote:
Strange. Can you hack qemu-kvm's cpuid code where it issues the ioctl
KVM_SET_CPUID2 to show what the data is? I'm not where that code is in
your version of qemu-kvm.
Gad, the last time I tried to mess around with this sort of low level
code was many
On Mon, 2010-04-05 at 10:35 -0700, Sridhar Samudrala wrote:
On Sun, 2010-04-04 at 14:14 +0300, Michael S. Tsirkin wrote:
On Fri, Apr 02, 2010 at 10:31:20AM -0700, Sridhar Samudrala wrote:
Make vhost scalable by creating a separate vhost thread per vhost
device. This provides better
Here are the results with netperf TCP_STREAM 64K guest to host on an
8-cpu Nehalem system.
I presume you mean 8 core Nehalem-EP, or did you mean 8 processor Nehalem-EX?
Don't get me wrong, I *like* the netperf 64K TCP_STREAM test, I like it a lot!-)
but I find it incomplete and also like to
On Tue, 6 Apr 2010 14:26:29 +0800
Xin, Xiaohui xiaohui@intel.com wrote:
How do you deal with the DoS problem of a hostile user space app posting a
huge number of receives and never getting anything?
That's a problem we are trying to deal with. It's critical in the long term.
Currently, we