How hard would it be to implement virtio over vbus and perhaps the
virtio-net backend?
This would leave only one variable in the comparison, clear up misconceptions, and
make evaluation easier by judging each of vbus, venet, etc. separately on its own
merits.
The way things are now, it is unclear
Would it be possible to use this technology in the KVM/Qemu project to
achieve better performance?
Could it be a significant step forward for the development of virtualization
technology?
Nothing is impossible, but it is at least not obvious how to pull
off such a trick.
Qemu/KVM is not embarrassingly
On Tue, May 5, 2009 at 11:37 AM, Stefan Hajnoczi stefa...@gmail.com wrote:
If a set of drivers essentially implementing the virtio framework
(virtio_pci, virtio_ring, virtio queues) were available for
windows, that would be *really* neat.
I haven't tried them myself, but I think this would give:
- paravirtualized drivers widely available both for Linux and Windows
(Xen's drivers on windows can be hard and/or expensive to get)
Well, Xen has GPL PV drivers for windows (at least for networking)
which KVM doesn't have. There is a promise
but no date attached to it.
I'd be happy with a simple comment explaining the 0x103f, e.g.:
/* Not yet using the full 0x1000 - 0x10ff, to hedge our bets in case we
   broke the ABI. */
as explained above.
Thanks, I like your patch.
Where did this idea of an experimental range come from, BTW?
In the qemu sources, there is a
And as part of the handle_output callback for the kick, on the qemu side I am
simply calling virtio_notify:
static void virtio_sample_handle_output(VirtIODevice *vdev, VirtQueue *vq)
{
    printf("Function = %s, Line = %d\n", __func__, __LINE__);
    virtio_notify(vdev, vq);
}
On Tue, Apr 28, 2009 at 6:19 PM, Gerd Hoffmann kra...@redhat.com wrote:
Hi,
Ok, since a day has passed with no further comments, I'll dare to
assume everyone is happy with this solution. So, here is an
implementation. I've tested locally with my driver that uses 0x10f5
and it seems to
On Mon, Apr 27, 2009 at 2:56 PM, Avi Kivity a...@redhat.com wrote:
Pantelis Koukousoulas wrote:
Or maybe
modprobe virtio-pci claim=0x10f2 claim=0x10f7
How about claim=0x10f2,0x10f7 instead so that it can be implemented as
a standard module array parameter?
Even better.
I'd suggest to exclude the experimental range by default (enable via module
parameter) to make clear it isn't for regular use.
Module parameter on what? The module parameter parsing code is afaict
provided by the end-driver (e.g., virtio-net) which only speaks virtio and has
no idea there is an
Please copy the virtio maintainer (Rusty Russell ru...@rustcorp.com.au) on
virtio guest patches.
Well, for now the issue is whether my understanding of qemu/pci-ids.txt and the
comment in virtio_pci.c that both say that the full 0x1000 - 0x10ff range of PCI
device IDs is donated for virtio_pci
On Mon, Apr 27, 2009 at 3:44 AM, Anthony Liguori anth...@codemonkey.ws wrote:
Rusty Russell wrote:
On Sun, 26 Apr 2009 10:19:16 pm Avi Kivity wrote:
0x1000-0x10ff is correct. I don't know where the 0x103f came from.
Rusty?
We decided to hedge our bets in case we broke the ABI.
AFAICT
According to the file pci-ids.txt in qemu sources, the range of PCI
device IDs assigned to virtio_pci is 0x1000 to 0x10ff, with a few
subranges that have different rules regarding who can get an ID
there and how.
Nevertheless, the full range should be assigned to the generic
virtio_pci driver, so