RE: KVM_HYPERCALL

2009-05-18 Thread Kumar, Venkat
Hi Avi - Yes, control is not reaching either kvm_handle_exit or 
handle_vmcall after the hypercall is made from the guest.
If I am not wrong, the KVM_HYPERCALL instruction is expected to work, isn't 
it?

Thx,

Venkat


-Original Message-
From: Avi Kivity [mailto:a...@redhat.com] 
Sent: Monday, May 18, 2009 1:56 AM
To: Kumar, Venkat
Cc: kvm@vger.kernel.org
Subject: Re: KVM_HYPERCALL

Kumar, Venkat wrote:
 I am making a hypercall (kvm_hypercall0) with number 0 from a Linux guest, 
 but I don't see control reaching handle_vmcall or even 
 kvm_handle_exit. What could be the reason?
   

No idea.  kvm_handle_exit() is called very frequently, even without 
hypercalls.  Are you sure you don't see it called?

-- 
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.



RE: [PATCH v2] Shared memory device with interrupt support

2009-05-18 Thread Kumar, Venkat
Cam - I got your patch to work but without notifications. I could share memory 
using the patch but notifications aren't working.

I bring up two VMs with the options -ivshmem shrmem,1024,/dev/shm/shrmem,server 
and -ivshmem shrmem,1024,/dev/shm/shrmem respectively.

When I make an ioctl from one of the VMs to inject an interrupt into the other 
VM, I get an error in qemu_chr_write and the return value is -1. The write call 
in send_all is failing with return value -1.

Am I missing something here?

Thx,

Venkat


-Original Message-
From: Cam Macdonell [mailto:c...@cs.ualberta.ca]
Sent: Saturday, May 16, 2009 9:01 AM
To: Kumar, Venkat
Cc: kvm@vger.kernel.org list
Subject: Re: [PATCH v2] Shared memory device with interrupt support


On 15-May-09, at 8:54 PM, Kumar, Venkat wrote:

 Cam,

 A question on interrupts as well.
 What is the unix:path that needs to be passed in the argument list?
 Can it be any string?

It has to be a valid path on the host.  It will create a unix domain
socket on that path.


 If my understanding is correct, both VMs that want to
 communicate would give this path on the command line, with one of
 them specified as the server.

Exactly, the one with the server in the parameter list will wait for
a connection before booting.

Cam


 Thx,
 Venkat






Support an inter-vm shared memory device that maps a shared-memory object
 as a PCI device in the guest.  This patch also supports interrupts between
 guests by communicating over a unix domain socket.  This patch applies to
 the qemu-kvm repository.

 This device now creates a qemu character device and sends 1-byte messages
 to trigger interrupts.  Writes are triggered by writing to the Doorbell
 register on the shared memory PCI device.  The lower 8 bits of the value
 written to this register are sent as the 1-byte message so different
 meanings of interrupts can be supported.

 Interrupts are only supported between 2 VMs currently.  One VM must act
 as the server by adding "server" to the command-line arguments.  Shared
 memory devices are created with the following command-line:

 -ivshmem <shm object>,<size in MB>,[unix:<path>][,server]

 Interrupts can also be used between host and guest by implementing a
 listener on the host.

 Cam


RE: KVM_HYPERCALL

2009-05-18 Thread Kumar, Venkat
Ok. With KVM-85 it works. I was using KVM-84 earlier.

Thx,

Venkat


-Original Message-
From: Avi Kivity [mailto:a...@redhat.com] 
Sent: Monday, May 18, 2009 5:03 PM
To: Kumar, Venkat
Cc: kvm@vger.kernel.org
Subject: Re: KVM_HYPERCALL

Kumar, Venkat wrote:
 Hi Avi - Yes, control is not reaching either kvm_handle_exit or 
 handle_vmcall after the hypercall is made from the guest.
 If I am not wrong, the KVM_HYPERCALL instruction is expected to work, isn't 
 it?
   

Yes, it should.  Are you sure the guest is executing this instruction?

-- 
error compiling committee.c: too many arguments to function



RE: [PATCH v2] Shared memory device with interrupt support

2009-05-18 Thread Kumar, Venkat
I had tried all syntaxes other than this :).
Interrupts work now.

Thx,

Venkat

-Original Message-
From: Cam Macdonell [mailto:c...@cs.ualberta.ca]
Sent: Monday, May 18, 2009 9:51 PM
To: Kumar, Venkat
Cc: kvm@vger.kernel.org list
Subject: Re: [PATCH v2] Shared memory device with interrupt support

Kumar, Venkat wrote:
 Cam - I got your patch to work but without notifications. I could share 
 memory using the patch but notifications aren't working.

 I bring up two VMs with the options -ivshmem shrmem,1024,/dev/shm/shrmem,server 
 and -ivshmem shrmem,1024,/dev/shm/shrmem respectively.

Ok, I guess I need to do more error checking of arguments :)  You need
to specify unix: on the path.  So your options should look like this:

-ivshmem shrmem,1024,unix:/dev/shm/shrmem,server

-ivshmem shrmem,1024,unix:/dev/shm/shrmem

That should help.
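
For reference, here is a minimal host-side listener sketch (hypothetical, not 
part of the patch) that connects to the same unix: socket and prints each 
1-byte doorbell message. Since the patch only supports two endpoints, such a 
listener would take the place of the second VM.

/* Hypothetical listener: connect to the unix socket created by the
 * "server" VM and print each 1-byte doorbell message. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(int argc, char **argv)
{
    struct sockaddr_un addr;
    unsigned char msg;
    int fd;

    if (argc < 2) {
        fprintf(stderr, "usage: %s <socket path>\n", argv[0]);
        return 1;
    }

    fd = socket(AF_UNIX, SOCK_STREAM, 0);
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, argv[1], sizeof(addr.sun_path) - 1);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }
    while (read(fd, &msg, 1) == 1)      /* one byte per doorbell write */
        printf("doorbell message: %u\n", msg);

    close(fd);
    return 0;
}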

Cam


 When I make an ioctl from one of the VMs to inject an interrupt into the 
 other VM, I get an error in qemu_chr_write and the return value is -1. 
 The write call in send_all is failing with return value -1.

 Am I missing something here?

 Thx,

 Venkat


 -Original Message-
 From: Cam Macdonell [mailto:c...@cs.ualberta.ca]
 Sent: Saturday, May 16, 2009 9:01 AM
 To: Kumar, Venkat
 Cc: kvm@vger.kernel.org list
 Subject: Re: [PATCH v2] Shared memory device with interrupt support


 On 15-May-09, at 8:54 PM, Kumar, Venkat wrote:

 Cam,

 A question on interrupts as well.
 What is the unix:path that needs to be passed in the argument list?
 Can it be any string?

 It has to be a valid path on the host.  It will create a unix domain
 socket on that path.

 If my understanding is correct, both VMs that want to
 communicate would give this path on the command line, with one of
 them specified as the server.

 Exactly, the one with the server in the parameter list will wait for
 a connection before booting.

 Cam

 Thx,
 Venkat






Support an inter-vm shared memory device that maps a shared-memory object
 as a PCI device in the guest.  This patch also supports interrupts between
 guests by communicating over a unix domain socket.  This patch applies to
 the qemu-kvm repository.

 This device now creates a qemu character device and sends 1-byte messages
 to trigger interrupts.  Writes are triggered by writing to the Doorbell
 register on the shared memory PCI device.  The lower 8 bits of the value
 written to this register are sent as the 1-byte message so different
 meanings of interrupts can be supported.

 Interrupts are only supported between 2 VMs currently.  One VM must act
 as the server by adding "server" to the command-line arguments.  Shared
 memory devices are created with the following command-line:

 -ivshmem <shm object>,<size in MB>,[unix:<path>][,server]

 Interrupts can also be used between host and guest by implementing a
 listener on the host.

 Cam


RE: [PATCH v2] Shared memory device with interrupt support

2009-05-15 Thread Kumar, Venkat
Hi Cam, I have gone through your latest shared memory patch.
I have a few questions and comments.

Comment:-
+if (ivshmem_enabled) {
+ivshmem_init(ivshmem_device);
+ram_size += ivshmem_get_size();
+}
+

In your initial patch, this part of the patch was:

+if (ivshmem_enabled) {
+ivshmem_init(ivshmem_device);
+phys_ram_size += ivshmem_get_size();
+}

I think the phys_ram_size += ivshmem_get_size(); version is the correct one.

Question:-
You are giving the desired virtual address for mmap()ing the shared memory object 
as s->ivshmem_ptr, which is phys_ram_base + s->ivshmem_offset. This desired 
virtual address is simply the base virtual address of the memory that you are 
allocating after incrementing phys_ram_size. So now s->ivshmem_ptr points to a 
new set of memory, the shared memory region, instead of the memory allocated 
through qemu_alloc_physram, which means that if pages were already allocated for 
the s->ivshmem_ptr virtual address range, those pages can never be addressed 
again. Correct me if my understanding is wrong.
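
A standalone illustration of the concern described above (the names here are 
local to the example, not qemu's): an mmap() with MAP_FIXED at an address that 
is already mapped silently discards the pages that were mapped there before.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t sz = 4096;

    /* stands in for part of the region allocated via qemu_alloc_physram */
    char *ram = mmap(NULL, sz, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (ram == MAP_FAILED)
        return 1;
    strcpy(ram, "original guest RAM contents");

    /* stands in for mapping the shared memory object at the same address */
    char *shm = mmap(ram, sz, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);

    /* same address, but the old contents are gone (prints an empty string) */
    printf("ram=%p shm=%p contents='%s'\n", (void *)ram, (void *)shm, ram);
    return 0;
}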

Thx,

Venkat


-Original Message-
From: kvm-ow...@vger.kernel.org [mailto:kvm-ow...@vger.kernel.org] On Behalf Of 
Cam Macdonell
Sent: Thursday, May 07, 2009 9:47 PM
To: kvm@vger.kernel.org
Cc: Cam Macdonell
Subject: [PATCH v2] Shared memory device with interrupt support

Support an inter-vm shared memory device that maps a shared-memory object 
as a PCI device in the guest.  This patch also supports interrupts between 
guests by communicating over a unix domain socket.  This patch applies to the 
qemu-kvm repository.

This device now creates a qemu character device and sends 1-byte messages to 
trigger interrupts.  Writes are triggered by writing to the Doorbell register 
on the shared memory PCI device.  The lower 8 bits of the value written to this 
register are sent as the 1-byte message so different meanings of interrupts can 
be supported.

Interrupts are only supported between 2 VMs currently.  One VM must act as the 
server by adding "server" to the command-line arguments.  Shared memory devices 
are created with the following command-line:

-ivshmem <shm object>,<size in MB>,[unix:<path>][,server]

Interrupts can also be used between host and guest by implementing a 
listener on the host.

Cam

---
 Makefile.target |3 +
 hw/ivshmem.c|  421 +++
 hw/pc.c |6 +
 hw/pc.h |3 +
 qemu-options.hx |   14 ++
 sysemu.h|8 +
 vl.c|   14 ++
 7 files changed, 469 insertions(+), 0 deletions(-)
 create mode 100644 hw/ivshmem.c

diff --git a/Makefile.target b/Makefile.target
index b68a689..3190bba 100644
--- a/Makefile.target
+++ b/Makefile.target
@@ -643,6 +643,9 @@ OBJS += pcnet.o
 OBJS += rtl8139.o
 OBJS += e1000.o

+# Inter-VM PCI shared memory
+OBJS += ivshmem.o
+
 # Generic watchdog support and some watchdog devices
 OBJS += watchdog.o
 OBJS += wdt_ib700.o wdt_i6300esb.o
diff --git a/hw/ivshmem.c b/hw/ivshmem.c
new file mode 100644
index 000..95e2268
--- /dev/null
+++ b/hw/ivshmem.c
@@ -0,0 +1,421 @@
+/*
+ * Inter-VM Shared Memory PCI device.
+ *
+ * Author:
+ *  Cam Macdonell c...@cs.ualberta.ca
+ *
+ * Based On: cirrus_vga.c and rtl8139.c
+ *
+ * This code is licensed under the GNU GPL v2.
+ */
+
+#include "hw.h"
+#include "console.h"
+#include "pc.h"
+#include "pci.h"
+#include "sysemu.h"
+
+#include "qemu-common.h"
+#include <sys/mman.h>
+
+#define PCI_COMMAND_IOACCESS0x0001
+#define PCI_COMMAND_MEMACCESS   0x0002
+#define PCI_COMMAND_BUSMASTER   0x0004
+
+//#define DEBUG_IVSHMEM
+
+#ifdef DEBUG_IVSHMEM
+#define IVSHMEM_DPRINTF(fmt, args...)\
+do {printf("IVSHMEM: " fmt, ##args); } while (0)
+#else
+#define IVSHMEM_DPRINTF(fmt, args...)
+#endif
+
+typedef struct IVShmemState {
+uint16_t intrmask;
+uint16_t intrstatus;
+uint16_t doorbell;
+uint8_t *ivshmem_ptr;
+unsigned long ivshmem_offset;
+unsigned int ivshmem_size;
+unsigned long bios_offset;
+unsigned int bios_size;
+target_phys_addr_t base_ctrl;
+int it_shift;
+PCIDevice *pci_dev;
+CharDriverState * chr;
+unsigned long map_addr;
+unsigned long map_end;
+int ivshmem_mmio_io_addr;
+} IVShmemState;
+
+typedef struct PCI_IVShmemState {
+PCIDevice dev;
+IVShmemState ivshmem_state;
+} PCI_IVShmemState;
+
+typedef struct IVShmemDesc {
+char name[1024];
+char * chrdev;
+int size;
+} IVShmemDesc;
+
+
+/* registers for the Inter-VM shared memory device */
+enum ivshmem_registers {
+IntrMask = 0,
+IntrStatus = 16,
+Doorbell = 32
+};
+
+static int num_ivshmem_devices = 0;
+static IVShmemDesc ivshmem_desc;
+
+static void ivshmem_map(PCIDevice *pci_dev, int region_num,
+uint32_t addr, uint32_t size, int type)
+{
+PCI_IVShmemState *d = (PCI_IVShmemState *)pci_dev;
+IVShmemState *s = &d->ivshmem_state;
+
+IVSHMEM_DPRINTF("addr = %u size = 

KVM_HYPERCALL

2009-05-13 Thread Kumar, Venkat
I am making a hypercall (kvm_hypercall0) with number 0 from a Linux guest, but 
I don't see control reaching handle_vmcall or even kvm_handle_exit. 
What could be the reason?
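
For reference, a minimal guest-side sketch (a hypothetical test module, not the 
code referred to above) of issuing such a hypercall; kvm_hypercall0() wraps the 
VMCALL/VMMCALL instruction, which should exit to the host and reach 
handle_vmcall().

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/kvm_para.h>

static int __init hc_test_init(void)
{
    long ret;

    if (!kvm_para_available())      /* only meaningful inside a KVM guest */
        return -ENODEV;

    ret = kvm_hypercall0(0);        /* nr 0, as in the test above */
    printk(KERN_INFO "kvm_hypercall0(0) returned %ld\n", ret);
    return 0;
}

static void __exit hc_test_exit(void)
{
}

module_init(hc_test_init);
module_exit(hc_test_exit);
MODULE_LICENSE("GPL");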

Thx,

Venkat




Allocating Extra Memory To Guest

2009-05-06 Thread Kumar, Venkat
Hi,

1. How should we allocate extra memory to the guest, other than memory allocated 
through qemu_alloc_physram?
2. How do we register the extra allocated memory with KVM?

I have tried to allocate one extra page to the guest but couldn't succeed; 
probably somebody has already done this exercise.

1. In Qemu/vl.c, I allocate a page for the guest:
a. virt_addr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_PRIVATE |
MAP_ANON, -1, 0);

2. In machine-init:
a. ram_addr = qemu_ram_alloc(PAGE_SIZE);
b. cpu_register_physical_memory(below_4g_mem_size, PAGE_SIZE, ram_addr + offset);
(offset is the difference between the virtual address qemu got from mmap and 
the last virtual address qemu allocated through qemu_alloc_physram.)

3. Increase phys_ram_size and ram_size by one page.

Did I miss something here?
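
For question 2, at the raw KVM API level (below qemu's own bookkeeping in steps 
1-3 above), extra memory is handed to the kernel with KVM_SET_USER_MEMORY_REGION. 
A standalone sketch, with an illustrative slot number and guest-physical address 
and minimal error handling:

#include <fcntl.h>
#include <linux/kvm.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

int main(void)
{
    size_t sz = 4096;
    int kvm = open("/dev/kvm", O_RDWR);
    int vm = kvm < 0 ? -1 : ioctl(kvm, KVM_CREATE_VM, 0);
    struct kvm_userspace_memory_region region;
    void *mem;

    if (vm < 0) {
        perror("KVM_CREATE_VM");
        return 1;
    }

    /* host-side backing memory, as in step 1 */
    mem = mmap(NULL, sz, PROT_READ | PROT_WRITE,
               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    region.slot = 1;                         /* illustrative slot */
    region.flags = 0;
    region.guest_phys_addr = 0x10000000;     /* illustrative guest-physical base */
    region.memory_size = sz;
    region.userspace_addr = (unsigned long)mem;

    /* register the extra memory with KVM */
    if (ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region) < 0) {
        perror("KVM_SET_USER_MEMORY_REGION");
        return 1;
    }
    printf("registered one extra page with KVM\n");
    return 0;
}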

Thx,

Venkat




FW: Notification from Qemu to Guest

2009-04-28 Thread Kumar, Venkat
Hi Avi - Perhaps you can answer this question?

Thx,

Venkat


-Original Message-
From: kvm-ow...@vger.kernel.org [mailto:kvm-ow...@vger.kernel.org] On Behalf Of 
Kumar, Venkat
Sent: Tuesday, April 28, 2009 4:16 PM
To: kvm@vger.kernel.org
Subject: Notification from Qemu to Guest

I have emulated a PCI device on Qemu and hooked my sample/simple driver to that 
virtio device on the guest.

I am testing the notification from Guest->Qemu and vice versa.

I am able to notify from Guest to Qemu, but the Qemu->Guest notification is not 
happening.

As part of the kick routine in my guest driver I could see the notification 
happening from Guest->Qemu, and in the Qemu process, as part of the handle-output 
routine for the emulated device, I am simply doing virtio_notify(vdev, vq), but I 
don't see my callback getting called, which is already registered as part of 
find_vq in the guest driver's probe.

BTW, the emulated device is allocated GSI 11, whereas other emulated devices 
like virtio-blk are associated with GSI 10, which I found in dmesg. 
Is this the reason why the interrupt is not delivered from Qemu->Guest?

Any clues?

Thx,

Venkat



Notification from Qemu to Guest

2009-04-28 Thread Kumar, Venkat
I have emulated a PCI device on Qemu and hooked my sample/simple driver to that 
virtio device on the guest.

I am testing the notification from Guest->Qemu and vice versa.

I am able to notify from Guest to Qemu, but the Qemu->Guest notification is not 
happening.

As part of the kick routine in my guest driver I could see the notification 
happening from Guest->Qemu, and in the Qemu process, as part of the handle-output 
routine for the emulated device, I am simply doing virtio_notify(vdev, vq), but I 
don't see my callback getting called, which is already registered as part of 
find_vq in the guest driver's probe.

BTW, the emulated device is allocated GSI 11, whereas other emulated devices 
like virtio-blk are associated with GSI 10, which I found in dmesg. 
Is this the reason why the interrupt is not delivered from Qemu->Guest?

Any clues?

Thx,

Venkat



RE: FW: Notification from Qemu to Guest

2009-04-28 Thread Kumar, Venkat

Hi Anthony - My questions and comments are inline.

Thx,

Venkat

-Original Message-
From: Anthony Liguori [mailto:anth...@codemonkey.ws] 
Sent: Tuesday, April 28, 2009 8:41 PM
To: Avi Kivity
Cc: Kumar, Venkat; kvm@vger.kernel.org
Subject: Re: FW: Notification from Qemu to Guest

Avi Kivity wrote:
 I have emulated a PCI device on Qemu and hooked my sample/simple 
 driver to that virtio device on the guest.

This is independent of the existing virtio PCI device?

== No, it is the same as the existing virtio PCI device; I am reusing the 
virtio-blk model. In qemu/hw/pc.c, I am spawning an emulated PCI device like 
this: virtio_sample_init(pci_bus, drives_table[25].bdrv);. I haven't explored 
the BlockDriverState parameter (the second parameter) and have hardcoded the 
index to 25 because I don't want to associate the device with an image file on 
disk; I just want to use the device for communication. I am manually filling 
the BlockDriverState at index 25 in the following way:
BlockDriverState *bdrv_temp;
int idx_tmp;

bdrv_temp = bdrv_new("SAMPLE Disk Device");
drives_table[25].bdrv = bdrv_temp;
drives_table[25].type = type;
drives_table[25].bus = bus_id;
drives_table[25].unit = unit_id;
drives_table[25].onerror = onerror;
nb_drives++;
(in qemu/vl.c)

 As a part of Kick routine in my guest driver I could see the 
 notification happening from Guest-Qemu and In the Qemu process as a 
 part of handle output for the emulated device I am simply doing  
 virtio_notify(vdev, vq) but I don't see my callback getting called 
 which is already registered as a part of find_vq in guest driver's 
 probe.
   

 You need to enable notifications, not sure how exactly.

By default, if you zeroed the memory for the ring, notifications are 
enabled.  You have to set a bit to disable notifications.  It sounds 
like you aren't properly injecting the IRQ, which is hard to assess 
without more detail about the particular device you've added to QEMU.

Are you reusing the existing virtio PCI infrastructure in QEMU?

== I am using the existing vp_find_vq to allocate the virtqueue and ring, so I 
assume this function zeroes the ring memory.
As part of the probe in the guest virtio sample driver, I am calling find_vq and 
then kick immediately to test notifications:

static int virtsample_probe(struct virtio_device *vdev)
{

struct virtio_sample *vsample;
int err;

printk("Virtio SAMPLE probe is called\n");

if (index_to_minor(index) >= 1 << MINORBITS)
return -ENOSPC;

vdev->priv = vsample = kmalloc(sizeof(*vsample), GFP_KERNEL);

if (!vsample) {
err = -ENOMEM;
goto out;
}

INIT_LIST_HEAD(&vsample->reqs);
spin_lock_init(&vsample->lock);
vsample->vdev = vdev;

/* We expect one virtqueue, for output. */
vsample->vq = vdev->config->find_vq(vdev, 0, sample_done);
if (IS_ERR(vsample->vq)) {
err = PTR_ERR(vsample->vq);
goto out_free_vsample;
}


vsample->vq->vq_ops->kick(vsample->vq);
return 0;

out_free_vsample:
kfree(vsample);
printk("Failed in output_free_sample\n");
out:
return err;
}

And as part of the handle-output routine for the kick on the qemu side, I am 
simply calling virtio_notify:
static void virtio_sample_handle_output(VirtIODevice *vdev, VirtQueue *vq)
{
printf("Function = %s, Line = %d\n", __FUNCTION__, __LINE__);
virtio_notify(vdev, vq);
}

The kick is working fine, as I land in Qemu when making that call. However, 
virtio_notify does not result in the invocation of my callback registered as 
part of find_vq.
Do you see any missing parts here?

 BTW, the emulated device is allocated GSI 11, whereas other 
 emulated devices like virtio-blk are associated with GSI 10, 
 which I found in dmesg. Is this the reason why the interrupt is not 
 delivered from Qemu->Guest?
   

 Interrupts for PCI devices are assigned based on the slots where they 
 sit. Both GSI 10 and GSI 11 are PCI link interrupts.

virtio-pci always uses LNK A.  How it gets mapped to GSI depends on the 
slot as Avi mentioned.

Regards,

Anthony Liguori


Virtio Queries

2009-04-09 Thread Kumar, Venkat
Hi, I have a few questions on Virtio; I hope somebody can clarify them.

1. What is the address type that is put inside the vring descriptor in the virt 
queue when placing a request? Is it the guest virtual address or the guest 
physical address?
2. If it is the guest physical address, how does qemu convert it to its own 
virtual address before processing the buffer?
3. How does qemu interact with the guest once it receives the buffers from the 
virt queue for doing the I/O?

Thx,

Venkat




RE: Mapping a virtual block device on the guest

2009-04-07 Thread Kumar, Venkat
The reason I ask this question is that I don't have the virtio-pci and 
virtio-blk modules loaded in my guest, yet I can still access the virtual 
block device.

Thx,

Venkat


-Original Message-
From: kvm-ow...@vger.kernel.org [mailto:kvm-ow...@vger.kernel.org] On Behalf Of 
Kumar, Venkat
Sent: Tuesday, April 07, 2009 3:52 PM
To: kvm@vger.kernel.org
Subject: Mapping a virtual block device on the guest

I understand that the option -drive file=/dev/path/to/host/device,if=virtio 
would map the device given in the file parameter as a virtual block device on 
the guest. Can somebody explain the roles of the virtio-pci and virtio-blk 
modules on the guest in accessing the virtual block device? I am unable to 
understand the design.

Thx,

Venkat




Qemu process in Guest

2009-04-02 Thread Kumar, Venkat
1. How does the Qemu process start running the guest?
2. How does a guest I/O request get trapped into the user-mode qemu process?

Thx,

Venkat




Inter VM Communication

2009-03-24 Thread Kumar, Venkat

Just as Xen has Xenbus, an emulated platform PCI device, and events for inter-VM 
communication, does KVM have any mechanism for inter-VM communication? 
How can a page be shared between two virtual machines running on KVM?
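
For reference, a minimal host-side sketch of the building block that the ivshmem 
patch discussed elsewhere in this digest relies on: a named POSIX shared-memory 
object that the qemu processes backing two VMs can both map. The object name is 
illustrative; link with -lrt on older glibc.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t sz = 4096;
    int fd = shm_open("/shrmem", O_CREAT | O_RDWR, 0600);
    char *p;

    if (fd < 0 || ftruncate(fd, sz) < 0) {
        perror("shm_open/ftruncate");
        return 1;
    }
    p = mmap(NULL, sz, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED)
        return 1;

    strcpy(p, "hello from one VM's process");   /* visible to every mapper */
    printf("%s\n", p);
    return 0;
}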

Thx,
Venkat
