On 08/26/2011 01:03 AM, David Evensky wrote:
I need to specify the physical address because I need to ioremap the
memory during boot.
Did you consider pci_ioremap_bar()?
The production issue I think is a memory limitation. We certainly do
use QEMU a lot; but for this the kvm tool is a better fit.
On Sun, Aug 28, 2011 at 10:34:45AM +0300, Avi Kivity wrote:
On 08/26/2011 01:03 AM, David Evensky wrote:
I need to specify the physical address because I need to ioremap the
memory during boot.
Did you consider pci_ioremap_bar()?
No, the code needs a physical memory address, not a PCI
On Thu, 2011-08-25 at 16:35 -0500, Anthony Liguori wrote:
On 08/24/2011 05:25 PM, David Evensky wrote:
This patch adds a PCI device that provides PCI device memory to the
guest. This memory in the guest exists as a shared memory segment in
the host. This is similar to the memory sharing capability of Nahanni
(ivshmem) available in QEMU.
On Thu, 2011-08-25 at 08:08 -0700, David Evensky wrote:
Adding in the rest of what ivshmem does shouldn't affect our use, *I
think*. I hadn't intended this to do everything that ivshmem does,
but I can see how that would be useful. It would be cool if it could
grow into that.
David,
I've
Sasha,
That is wonderful. It sounds like it should be OK, and will be happy
to test.
\dae
On Fri, Aug 26, 2011 at 09:33:58AM +0300, Sasha Levin wrote:
On Thu, 2011-08-25 at 08:08 -0700, David Evensky wrote:
Adding in the rest of what ivshmem does shouldn't affect our use, *I
think*. I
I don't know if there is a PCI card that only provides a region
of memory. I'm not really trying to provide emulation for a known
piece of hardware, so I picked values that weren't being used since
there didn't appear to be an 'unknown'. I'll ask around.
\dae
On Thu, Aug 25, 2011 at 08:41:43AM
On Thu, Aug 25, 2011 at 1:25 AM, David Evensky even...@sandia.gov wrote:
+ if (*next == '\0')
+ p = next;
+ else
+ p = next + 1;
+ /* parse out size */
+ base = 10;
+ if (strcasestr(p, "0x"))
+ base = 16;
+ size =
On Thu, Aug 25, 2011 at 09:02:56AM +0300, Pekka Enberg wrote:
On Thu, Aug 25, 2011 at 1:25 AM, David Evensky even...@sandia.gov wrote:
+	if (*next == '\0')
+		p = next;
+	else
+		p = next + 1;
+	/* parse out size */
+	base = 10;
+
On Thu, Aug 25, 2011 at 1:54 PM, Pekka Enberg penb...@kernel.org wrote:
On 8/25/11 8:34 AM, Asias He wrote:
Hi, David
On Thu, Aug 25, 2011 at 6:25 AM, David Evensky even...@sandia.gov wrote:
This patch adds a PCI device that provides PCI device memory to the
guest. This memory in the
On 8/25/11 9:30 AM, Asias He wrote:
On Thu, Aug 25, 2011 at 1:54 PM, Pekka Enberg penb...@kernel.org wrote:
On 8/25/11 8:34 AM, Asias He wrote:
Hi, David
On Thu, Aug 25, 2011 at 6:25 AM, David Evensky even...@sandia.gov wrote:
This patch adds a PCI device that provides PCI device memory to
On Thu, Aug 25, 2011 at 3:02 PM, Pekka Enberg penb...@kernel.org wrote:
On 8/25/11 9:30 AM, Asias He wrote:
On Thu, Aug 25, 2011 at 1:54 PM, Pekka Enberg penb...@kernel.org wrote:
On 8/25/11 8:34 AM, Asias He wrote:
Hi, David
On Thu, Aug 25, 2011 at 6:25 AM, David
On 8/25/11 10:20 AM, Asias He wrote:
On Thu, Aug 25, 2011 at 3:02 PM, Pekka Enberg penb...@kernel.org wrote:
On 8/25/11 9:30 AM, Asias He wrote:
On Thu, Aug 25, 2011 at 1:54 PM, Pekka Enberg penb...@kernel.org wrote:
On 8/25/11 8:34 AM, Asias He wrote:
Hi, David
On Thu, Aug 25, 2011 at
On Thu, Aug 25, 2011 at 6:06 AM, Pekka Enberg penb...@kernel.org wrote:
On Wed, 2011-08-24 at 21:49 -0700, David Evensky wrote:
On Wed, Aug 24, 2011 at 10:27:18PM -0500, Alexander Graf wrote:
On 24.08.2011, at 17:25, David Evensky wrote:
This patch adds a PCI device that provides
Hi Stefan,
On Thu, Aug 25, 2011 at 1:31 PM, Stefan Hajnoczi stefa...@gmail.com wrote:
It's obviously not competing. One thing you might want to consider is
making the guest interface compatible with ivshmem. Is there any reason
we shouldn't do that? I don't consider that a requirement, just
On Thu, Aug 25, 2011 at 11:37 AM, Pekka Enberg penb...@kernel.org wrote:
Hi Stefan,
On Thu, Aug 25, 2011 at 1:31 PM, Stefan Hajnoczi stefa...@gmail.com wrote:
It's obviously not competing. One thing you might want to consider is
making the guest interface compatible with ivshmem. Is there any
On Thu, Aug 25, 2011 at 1:59 PM, Stefan Hajnoczi stefa...@gmail.com wrote:
Introducing yet another non-standard and non-Linux interface doesn't
help though. If there is no significant improvement over ivshmem then
it makes sense to let ivshmem gain critical mass and more users
instead of
On Thu, 2011-08-25 at 11:59 +0100, Stefan Hajnoczi wrote:
On Thu, Aug 25, 2011 at 11:37 AM, Pekka Enberg penb...@kernel.org wrote:
Hi Stefan,
On Thu, Aug 25, 2011 at 1:31 PM, Stefan Hajnoczi stefa...@gmail.com wrote:
It's obviously not competing. One thing you might want to consider is
On 08/25/2011 02:15 PM, Pekka Enberg wrote:
On Thu, Aug 25, 2011 at 1:59 PM, Stefan Hajnoczi stefa...@gmail.com wrote:
Introducing yet another non-standard and non-Linux interface doesn't
help though. If there is no significant improvement over ivshmem then
it makes sense to let ivshmem
On Thu, Aug 25, 2011 at 2:30 PM, Avi Kivity a...@redhat.com wrote:
On 08/25/2011 02:15 PM, Pekka Enberg wrote:
On Thu, Aug 25, 2011 at 1:59 PM, Stefan Hajnoczi stefa...@gmail.com
wrote:
Introducing yet another non-standard and non-Linux interface doesn't
help though. If there is no
On Thu, 2011-08-25 at 14:30 +0300, Avi Kivity wrote:
On 08/25/2011 02:15 PM, Pekka Enberg wrote:
On Thu, Aug 25, 2011 at 1:59 PM, Stefan Hajnoczi stefa...@gmail.com wrote:
Introducing yet another non-standard and non-Linux interface doesn't
help though. If there is no significant
On 08/25/2011 02:38 PM, Pekka Enberg wrote:
If you or other KVM folks want to have a say what goes into tools/kvm,
I'm happy to send you a pull request against kvm.git.
Thanks, but I have my hands full already. I'll stop offering unwanted
advice as well.
Anyway, Sasha thinks ivshmem is
On 08/25/2011 02:38 PM, Pekka Enberg wrote:
If you or other KVM folks want to have a say what goes into tools/kvm,
I'm happy to send you a pull request against kvm.git.
On Thu, Aug 25, 2011 at 2:51 PM, Avi Kivity a...@redhat.com wrote:
Thanks, but I have my hands full already. I'll stop
Adding in the rest of what ivshmem does shouldn't affect our use, *I
think*. I hadn't intended this to do everything that ivshmem does,
but I can see how that would be useful. It would be cool if it could
grow into that.
Our requirements for the driver in kvm tool are that another program
on
I've tested ivshmem with the latest git pull (had minor trouble
building on debian sid, vnc and unused var, but trivial to work
around).
QEMU's -device ivshmem,size=16,shm=/kvm_shmem
seems to function as my proposed
--shmem pci:0xfd00:16M:handle=/kvm_shmem
except that I can't
On 08/26/2011 12:00 AM, David Evensky wrote:
I've tested ivshmem with the latest git pull (had minor trouble
building on debian sid, vnc and unused var, but trivial to work
around).
QEMU's -device ivshmem,size=16,shm=/kvm_shmem
seems to function as my proposed
--shmem
On 08/24/2011 05:25 PM, David Evensky wrote:
This patch adds a PCI device that provides PCI device memory to the
guest. This memory in the guest exists as a shared memory segment in
the host. This is similar to the memory sharing capability of Nahanni
(ivshmem) available in QEMU. In this case, the
On Thu, Aug 25, 2011 at 04:35:29PM -0500, Anthony Liguori wrote:
--- linux-kvm/tools/kvm/include/kvm/virtio-pci-dev.h	2011-08-09 15:38:48.760120973 -0700
+++ linux-kvm_pci_shmem/tools/kvm/include/kvm/virtio-pci-dev.h	2011-08-18 10:06:12.171539230 -0700
@@ -15,10 +15,13 @@
I need to specify the physical address because I need to ioremap the
memory during boot.
The production issue I think is a memory limitation. We certainly do
use QEMU a lot; but for this the kvm tool is a better fit.
\dae
On Fri, Aug 26, 2011 at 12:11:03AM +0300, Avi Kivity wrote:
On
Just FYI, one issue that I found with exposing host memory regions as
a PCI BAR (including via a very old version of the ivshmem driver...
haven't tried a newer one) is that x86's pci_mmap_page_range doesn't
want to set up a write-back cacheable mapping of a BAR.
It may not matter for your
Thanks. My initial version did use the E820 map (thus the reason I
want to have an 'address family'), but it was suggested that PCI would
be a better way to go. When I get the rest of the project going, I
will certainly test against that. I am going to have to do a LOT of
ioremap's so that might
This patch adds a PCI device that provides PCI device memory to the
guest. This memory in the guest exists as a shared memory segment in
the host. This is similar to the memory sharing capability of Nahanni
(ivshmem) available in QEMU. In this case, the shared memory segment
is exposed as a PCI BAR
On 24.08.2011, at 17:25, David Evensky wrote:
This patch adds a PCI device that provides PCI device memory to the
guest. This memory in the guest exists as a shared memory segment in
the host. This is similar to the memory sharing capability of Nahanni
(ivshmem) available in QEMU. In this case,
On Wed, Aug 24, 2011 at 10:27:18PM -0500, Alexander Graf wrote:
On 24.08.2011, at 17:25, David Evensky wrote:
This patch adds a PCI device that provides PCI device memory to the
guest. This memory in the guest exists as a shared memory segment in
the host. This is similar memory
On 24.08.2011, at 23:49, David Evensky wrote:
On Wed, Aug 24, 2011 at 10:27:18PM -0500, Alexander Graf wrote:
On 24.08.2011, at 17:25, David Evensky wrote:
This patch adds a PCI device that provides PCI device memory to the
guest. This memory in the guest exists as a shared memory
On Wed, 2011-08-24 at 21:49 -0700, David Evensky wrote:
On Wed, Aug 24, 2011 at 10:27:18PM -0500, Alexander Graf wrote:
On 24.08.2011, at 17:25, David Evensky wrote:
This patch adds a PCI device that provides PCI device memory to the
guest. This memory in the guest exists as
On Wed, 2011-08-24 at 23:52 -0500, Alexander Graf wrote:
Isn't ivshmem in QEMU? If so, then I don't think there is any
competition. How do you feel that these are competing?
Well, it means that you will inside the guest have two different
devices depending whether you're using QEMU or
On 8/25/11 8:22 AM, Alexander Graf wrote:
On 25.08.2011, at 00:11, Pekka Enberg wrote:
On Wed, 2011-08-24 at 23:52 -0500, Alexander Graf wrote:
Isn't ivshmem in QEMU? If so, then I don't think there is any
competition. How do you feel that these are competing?
Well, it means that you
On 25.08.2011, at 00:37, Pekka Enberg wrote:
On 8/25/11 8:22 AM, Alexander Graf wrote:
On 25.08.2011, at 00:11, Pekka Enberg wrote:
On Wed, 2011-08-24 at 23:52 -0500, Alexander Graf wrote:
Isn't ivshmem in QEMU? If so, then I don't think there is any
competition. How do you feel that
On 08/25/2011 01:25 AM, David Evensky wrote:
#define PCI_DEVICE_ID_VIRTIO_BLN	0x1005
#define PCI_DEVICE_ID_VIRTIO_P9		0x1009
#define PCI_DEVICE_ID_VESA		0x2000
+#define PCI_DEVICE_ID_PCI_SHMEM		0x0001
#define
On Thu, Aug 25, 2011 at 08:06:34AM +0300, Pekka Enberg wrote:
On Wed, 2011-08-24 at 21:49 -0700, David Evensky wrote:
On Wed, Aug 24, 2011 at 10:27:18PM -0500, Alexander Graf wrote:
On 24.08.2011, at 17:25, David Evensky wrote:
This patch adds a PCI device that provides