Right now, whenever we exit for an hcall, upon return, we fetch
some register values from the structure that carries the hcall
results and update the vcpu accordingly.
However, if the hypercall chooses instead to update the registers
itself by calling set_regs, then we end up clobbering those
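A minimal userspace sketch of the fix this implies, assuming the approach is to flag register state written via set_regs and skip the post-hcall copy-back when the flag is set; all names here are illustrative stand-ins, not the real KVM symbols:

```c
#include <assert.h>
#include <string.h>

/* Toy model of the problem described above.  A vcpu holds guest
 * registers; an hcall returns results in a separate structure that
 * we normally copy back into the vcpu on return. */
struct toy_vcpu {
    unsigned long gpr[4];
    int regs_overridden;        /* set when userspace called set_regs */
};

struct toy_hcall_result {
    unsigned long gpr[4];
};

/* Userspace replaced the register state wholesale; remember that
 * so the copy-back below does not clobber it. */
static void toy_set_regs(struct toy_vcpu *vcpu, const unsigned long *regs)
{
    memcpy(vcpu->gpr, regs, sizeof(vcpu->gpr));
    vcpu->regs_overridden = 1;
}

/* Post-hcall copy-back, guarded so a set_regs call wins. */
static void toy_hcall_return(struct toy_vcpu *vcpu,
                             const struct toy_hcall_result *res)
{
    if (vcpu->regs_overridden) {
        vcpu->regs_overridden = 0;  /* consume the override */
        return;                     /* keep what set_regs wrote */
    }
    memcpy(vcpu->gpr, res->gpr, sizeof(vcpu->gpr));
}
```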
On Fri, 13 Jul 2012 16:38:51 +0800, Asias He as...@redhat.com wrote:
This patch introduces bio-based IO path for virtio-blk.
Acked-by: Rusty Russell ru...@rustcorp.com.au
I just hope we can do better than a module option in future.
Thanks,
Rusty.
On Thu, 26 Jul 2012 15:05:39 +0200, Paolo Bonzini pbonz...@redhat.com wrote:
On 26/07/2012 09:58, Paolo Bonzini wrote:
Please CC me on the convert-to-sg copy-less patches, it looks
interesting
Sure.
Well, here is the gist of it (note it won't apply on any public tree, hence
On 27/07/2012 08:27, Rusty Russell wrote:
+int virtqueue_add_buf_sg(struct virtqueue *_vq,
+ struct scatterlist *sg_out,
+ unsigned int out,
+ struct scatterlist *sg_in,
+ unsigned int in,
+
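The signature above is truncated; as a rough illustration of what such a helper has to do (walk the out list, then the in list, emitting one descriptor per sg entry, with the in descriptors marked device-writable), here is a toy model with stand-in types, not the real virtio ring code:

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-ins for the kernel types; purely illustrative. */
struct toy_sg { void *addr; unsigned int len; };
struct toy_desc { void *addr; unsigned int len; int write; };

/* Sketch of an add_buf_sg-style helper: emit one descriptor per sg
 * entry, out entries first (device-readable), then in entries
 * (device-writable).  Returns the number of descriptors used, or -1
 * if the ring lacks space. */
static int toy_add_buf_sg(struct toy_desc *ring, unsigned int ring_size,
                          const struct toy_sg *sg_out, unsigned int out,
                          const struct toy_sg *sg_in, unsigned int in)
{
    unsigned int i, n = 0;

    if (out + in > ring_size)
        return -1;
    for (i = 0; i < out; i++, n++) {
        ring[n].addr = sg_out[i].addr;
        ring[n].len = sg_out[i].len;
        ring[n].write = 0;      /* device reads these */
    }
    for (i = 0; i < in; i++, n++) {
        ring[n].addr = sg_in[i].addr;
        ring[n].len = sg_in[i].len;
        ring[n].write = 1;      /* device writes these */
    }
    return (int)n;
}
```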
Hello,
I managed to run this PCI (not PCIe(!)) device
06:07.0 Network controller [0280]: Ralink corp. RT2800 802.11n PCI [1814:0601]
Subsystem: Linksys Device [1737:0067]
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV+ VGASnoop- ParErr-
Stepping- SERR- FastB2B- DisINTx-
On 07/27/12 00:22, Chris Clayton wrote:
On 07/26/12 13:07, Avi Kivity wrote:
On 07/26/2012 02:58 PM, Chris Clayton wrote:
It looks like general memory corruption. Is this repeatable? What's
the guest uptime when it happens (i.e. is it immediate?)
I've just done 10 runs of WinXP SP3 and 5
On 05/07/2012 12:29, Jason Wang wrote:
Sometimes, a virtio device needs to configure an irq affinity hint to maximize
performance. Instead of just exposing the irq of a virtqueue, this patch
introduces an API to set the affinity for a virtqueue.
The API is best-effort; the affinity hint
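As a toy model of the best-effort spreading a multiqueue driver might do with such an affinity API (the round-robin policy and all names are assumptions for illustration, not taken from the patch):

```c
#include <assert.h>

/* Pick a CPU for a virtqueue by spreading virtqueues round-robin
 * across the available CPUs; returns -1 if no CPUs are known. */
static int toy_pick_cpu(unsigned int vq_index, unsigned int nr_cpus)
{
    return nr_cpus ? (int)(vq_index % nr_cpus) : -1;
}

/* Record a best-effort affinity hint for every virtqueue; a real
 * driver would pass each hint to a set-affinity call and ignore
 * failures, since the hint is advisory. */
static void toy_set_affinity_all(int *hint, unsigned int nr_vqs,
                                 unsigned int nr_cpus)
{
    unsigned int i;

    for (i = 0; i < nr_vqs; i++)
        hint[i] = toy_pick_cpu(i, nr_cpus);
}
```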
On 07/27/12 19:08, Eric Northup wrote:
Could you include the output of info registers at the point where it
crashed?
Here you go:
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0xb6a78b40 (LWP 13249)]
__strcmp_sse4_2 () at
On Fri, 2012-07-27 at 19:22 +, Blue Swirl wrote:
On Wed, Jul 25, 2012 at 5:03 PM, Alex Williamson
diff --git a/hw/vfio_pci.c b/hw/vfio_pci.c
new file mode 100644
index 000..e9ae421
--- /dev/null
+++ b/hw/vfio_pci.c
@@ -0,0 +1,2030 @@
+/*
+ * vfio based device assignment
Hello folks,
The RFC-v5 patch for tcm_vhost kernel code was sent out for review a bit
less than 24 hours ago, and thus far there have not been any additional
comments. Thanks to everyone who has been participating in the various
threads over the past week and giving their feedback!
Also, just a
=================================================================
KVM Forum 2012: Call For Participation
November 7-9, 2012 - Hotel Fira Palace - Barcelona, Spain
(All submissions must be received before midnight Aug 31st, 2012)
=================================================================
Hi Jens, Rusty,
This version is rebased against linux-next which resolves the conflict with
Paolo Bonzini's 'virtio-blk: allow toggling host cache between writeback and
writethrough' patch.
Patches 1/3 and 2/3 apply on Linus's master as well. Since Rusty will pick up
patch 3/3, the changes to
This patch introduces bio-based IO path for virtio-blk.
Compared to the request-based IO path, the bio-based IO path uses the
driver-provided ->make_request_fn() method to bypass the IO scheduler. It
hands the bio to the device directly without allocating a request in the
block layer. This reduces the IO path in
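A toy userspace illustration of the difference described above: the queue dispatches through a registered hook, and the bio-based hook skips the elevator entirely. All types and names are stand-ins, not the real block-layer API:

```c
#include <assert.h>

struct toy_bio { int sector; int done; };

struct toy_queue {
    /* driver-registered submission hook, analogous in spirit to a
     * make_request_fn-style entry point */
    void (*make_request_fn)(struct toy_queue *q, struct toy_bio *bio);
    int scheduler_passes;       /* counts trips through the elevator */
};

/* Request-based path: the bio goes through the IO scheduler first. */
static void toy_request_path(struct toy_queue *q, struct toy_bio *bio)
{
    q->scheduler_passes++;      /* merge/sort work happens here */
    bio->done = 1;
}

/* Bio-based path: handle the bio directly, no elevator involvement. */
static void toy_bio_path(struct toy_queue *q, struct toy_bio *bio)
{
    (void)q;
    bio->done = 1;
}

static void toy_submit(struct toy_queue *q, struct toy_bio *bio)
{
    q->make_request_fn(q, bio); /* dispatch via the registered hook */
}
```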
On 07/27/2012 08:33 AM, Rusty Russell wrote:
On Fri, 13 Jul 2012 16:38:51 +0800, Asias He as...@redhat.com wrote:
Add 'virtio_blk.use_bio=1' to the kernel cmdline or 'modprobe virtio_blk
use_bio=1' to enable the ->make_request_fn() based I/O path.
This patch conflicts with Paolo Bonzini's
On 28/07/12 05:22, Blue Swirl wrote:
On Wed, Jul 25, 2012 at 5:03 PM, Alex Williamson
+
+static void vfio_enable_intx_kvm(VFIODevice *vdev)
+{
+#ifdef CONFIG_KVM
These shouldn't be needed. The device will not be useful without KVM,
so the file shouldn't be compiled for the non-KVM case at all.
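A hypothetical build-system fragment illustrating the suggestion; the exact variable and file names depend on QEMU's Makefiles and are assumptions here:

```make
# Only build the VFIO PCI code when KVM support is configured in,
# instead of wrapping the file's contents in #ifdef CONFIG_KVM.
obj-$(CONFIG_KVM) += vfio_pci.o
```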
Hi all,
I am comparing network throughput performance in the bare-metal case
with that of running a VM with an assigned device (assigned NIC). I have two
physical machines (each has a 10Gbit NIC); one is used as the remote
server (runs netserver) and the other is used as the target under test
(runs netperf with
On Fri, 2012-07-27 at 22:09 -0500, sheng qiu wrote:
Hi Kumar,
Can you please review this patch set?
Regards
Varun
-----Original Message-----
From: Sethi Varun-B16395
Sent: Monday, July 09, 2012 6:23 PM
To: ag...@suse.de; ga...@kernel.crashing.org; b...@kernel.crashing.org;
linuxppc-...@lists.ozlabs.org; kvm-ppc@vger.kernel.org
Cc: Sethi