On 03/10/2010 07:41 PM, Paul Brook wrote:
You're much better off using a bulk-data transfer API that relaxes
coherency requirements. IOW, shared memory doesn't make sense for TCG
Rather, tcg doesn't make sense for shared memory smp. But we knew that
already.
I think TCG SMP is a
On Thursday 11 March 2010, Avi Kivity wrote:
A totally different option that avoids this whole problem would
be to separate the signalling from the shared memory, making the
PCI shared memory device a trivial device with a single memory BAR,
and using a higher-level concept like
On 03/11/2010 02:57 PM, Arnd Bergmann wrote:
On Thursday 11 March 2010, Avi Kivity wrote:
A totally different option that avoids this whole problem would
be to separate the signalling from the shared memory, making the
PCI shared memory device a trivial device with a single memory BAR,
and
On Thursday 11 March 2010, Avi Kivity wrote:
That would be much slower. The current scheme allows for an
ioeventfd/irqfd short circuit which allows one guest to interrupt
another without involving their qemus at all.
Yes, the serial line approach would be much slower, but my point
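The short circuit Avi describes rests on eventfd: the guest's doorbell write is matched in the kernel (ioeventfd) and signals an eventfd that is wired into the receiving guest as an irqfd, so neither qemu process has to run. A minimal standalone sketch of just that primitive, outside qemu and with the KVM_IOEVENTFD/KVM_IRQFD wiring omitted:

#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/eventfd.h>

int main(void)
{
    /* In ivshmem this fd comes from the shared memory server and is
     * passed to each qemu over the unix domain socket; here both ends
     * live in one process purely to show the primitive. */
    int efd = eventfd(0, 0);
    uint64_t kick = 1, count;

    /* "Doorbell": with KVM_IOEVENTFD registered on the Doorbell address,
     * the guest's MMIO write becomes this eventfd write, in the kernel. */
    write(efd, &kick, sizeof(kick));

    /* "Interrupt": with KVM_IRQFD, the kernel injects an interrupt into
     * the receiving guest when the eventfd fires; here we just consume
     * the counter. */
    read(efd, &count, sizeof(count));
    printf("doorbell rang %llu time(s)\n", (unsigned long long)count);
    return 0;
}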
On Thu, 11 Mar 2010, Nick Piggin wrote:
On Thu, Mar 11, 2010 at 03:10:47AM +, Jamie Lokier wrote:
Paul Brook wrote:
In a cross environment that becomes extremely hairy. For example the x86
architecture effectively has an implicit write barrier before every store, and
On 03/09/2010 08:34 PM, Cam Macdonell wrote:
On Tue, Mar 9, 2010 at 10:28 AM, Avi Kivity a...@redhat.com wrote:
On 03/09/2010 05:27 PM, Cam Macdonell wrote:
Registers are used
for synchronization between guests sharing the same memory object when
interrupts are
On 03/09/2010 11:44 PM, Anthony Liguori wrote:
Ah yes. For cross tcg environments you can map the memory using mmio
callbacks instead of directly, and issue the appropriate barriers there.
Not good enough unless you want to severely restrict the use of shared
memory within the guest.
For
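One way to picture the mmio-callback idea: rather than mapping the shared object straight into guest RAM, every guest access is routed through a host-side accessor that can issue a real barrier. The callback names below are illustrative, not qemu's io-memory API; they only show where the barriers would sit, and also why funnelling all traffic through callbacks is what restricts how the guest can use the region:

#include <stdint.h>
#include <string.h>

static uint8_t *shm_backing;   /* host mapping of the shared object */

/* Hypothetical read callback: barrier, then load from the backing store. */
static uint32_t shm_mmio_readl(uint64_t offset)
{
    uint32_t val;
    __sync_synchronize();
    memcpy(&val, shm_backing + offset, sizeof(val));
    return val;
}

/* Hypothetical write callback: store to the backing store, then barrier. */
static void shm_mmio_writel(uint64_t offset, uint32_t val)
{
    memcpy(shm_backing + offset, &val, sizeof(val));
    __sync_synchronize();
}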
On 03/10/2010 06:38 AM, Cam Macdonell wrote:
On Tue, Mar 9, 2010 at 5:03 PM, Paul Brook p...@codesourcery.com wrote:
In a cross environment that becomes extremely hairy. For example the x86
architecture effectively has an implicit write barrier before every
store, and an implicit read
As of March 2009[1] Intel guarantees that memory reads occur in order
(they may only be reordered relative to writes). It appears AMD do not
provide this guarantee, which could be an interesting problem for
heterogeneous migration..
Interesting, but what ordering would cause problems
On Tuesday 09 March 2010, Cam Macdonell wrote:
We could make the masking in RAM, not in registers, like virtio, which would
require no exits. It would then be part of the application specific
protocol and out of scope of this spec.
This kind of implementation would be possible now
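A sketch of that virtio-style approach: the receiver keeps its interrupt-enable flag inside the shared region, so masking and unmasking are plain stores with no exit, and the sender only pays for the exit-causing doorbell write when the flag is set. Layout and names here are illustrative, not part of the posted spec:

#include <stdint.h>
#include <stdbool.h>

struct shm_ctrl {
    volatile uint32_t irq_enabled;   /* lives in the shared memory itself */
    /* application data follows */
};

/* Receiver: mask/unmask without touching a device register. */
static void shm_set_irq_enabled(struct shm_ctrl *c, bool on)
{
    c->irq_enabled = on;
    __sync_synchronize();            /* publish the change before sleeping */
}

/* Sender: ring the doorbell (one MMIO write, one exit) only if wanted. */
static void shm_notify(struct shm_ctrl *c, void (*ring_doorbell)(void))
{
    __sync_synchronize();            /* order data writes before the check */
    if (c->irq_enabled)
        ring_doorbell();
}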
On Wed, Mar 10, 2010 at 2:21 AM, Avi Kivity a...@redhat.com wrote:
On 03/09/2010 08:34 PM, Cam Macdonell wrote:
On Tue, Mar 9, 2010 at 10:28 AM, Avi Kivity a...@redhat.com wrote:
On 03/09/2010 05:27 PM, Cam Macdonell wrote:
Registers are used
for synchronization between guests
On 03/10/2010 03:25 AM, Avi Kivity wrote:
On 03/09/2010 11:44 PM, Anthony Liguori wrote:
Ah yes. For cross tcg environments you can map the memory using
mmio callbacks instead of directly, and issue the appropriate
barriers there.
Not good enough unless you want to severely restrict the
On 03/10/2010 07:13 PM, Anthony Liguori wrote:
On 03/10/2010 03:25 AM, Avi Kivity wrote:
On 03/09/2010 11:44 PM, Anthony Liguori wrote:
Ah yes. For cross tcg environments you can map the memory using
mmio callbacks instead of directly, and issue the appropriate
barriers there.
Not good
You're much better off using a bulk-data transfer API that relaxes
coherency requirements. IOW, shared memory doesn't make sense for TCG
Rather, tcg doesn't make sense for shared memory smp. But we knew that
already.
I think TCG SMP is a hard, but soluble problem, especially when
Paul Brook wrote:
In a cross environment that becomes extremely hairy. For example the x86
architecture effectively has an implicit write barrier before every
store, and an implicit read barrier before every load.
Btw, x86 doesn't have any implicit barriers due to ordinary loads.
On Thu, Mar 11, 2010 at 03:10:47AM +, Jamie Lokier wrote:
Paul Brook wrote:
In a cross environment that becomes extremely hairy. For example the x86
architecture effectively has an implicit write barrier before every store,
and an implicit read barrier before every load.
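The practical upshot for guests using the shared region: a protocol should carry its own barriers rather than lean on x86's implicit store ordering, so it stays correct on hosts or emulated targets with weaker ordering. A small producer/consumer sketch (names chosen for illustration):

#include <stdint.h>

struct shared_slot {
    volatile uint32_t data;
    volatile uint32_t ready;
};

static void produce(struct shared_slot *s, uint32_t value)
{
    s->data = value;
    __sync_synchronize();   /* write barrier: data visible before ready */
    s->ready = 1;
}

static int consume(struct shared_slot *s, uint32_t *out)
{
    if (!s->ready)
        return 0;
    __sync_synchronize();   /* read barrier: ready observed before data */
    *out = s->data;
    return 1;
}

On x86 the fences cost little, but they make the ordering explicit instead of an accident of the host architecture.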
On 03/10/2010 07:41 PM, Paul Brook wrote:
You're much better off using a bulk-data transfer API that relaxes
coherency requirements. IOW, shared memory doesn't make sense for TCG
Rather, tcg doesn't make sense for shared memory smp. But we knew that
already.
I think TCG SMP is
On 03/10/2010 06:36 PM, Cam Macdonell wrote:
On Wed, Mar 10, 2010 at 2:21 AM, Avi Kivity a...@redhat.com wrote:
On 03/09/2010 08:34 PM, Cam Macdonell wrote:
On Tue, Mar 9, 2010 at 10:28 AM, Avi Kivity a...@redhat.com wrote:
On 03/09/2010 05:27 PM, Cam Macdonell wrote:
On 03/10/2010 04:04 PM, Arnd Bergmann wrote:
On Tuesday 09 March 2010, Cam Macdonell wrote:
We could make the masking in RAM, not in registers, like virtio, which would
require no exits. It would then be part of the application specific
protocol and out of scope of this spec.
On Monday 08 March 2010, Cam Macdonell wrote:
enum ivshmem_registers {
    IntrMask = 0,
    IntrStatus = 2,
    Doorbell = 4,
    IVPosition = 6,
    IVLiveList = 8
};
The first two registers are the interrupt mask and status registers.
Interrupts are triggered when a message is
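For illustration, a guest-side helper built on the enum quoted above, assuming the registers sit at those byte offsets in the register BAR and are 16 bits wide (the access width is an assumption here, not something the quoted spec states):

#include <stdint.h>

/* regs points at the mapped register BAR; offsets come from the
 * enum ivshmem_registers in the spec above. */
static inline uint16_t ivshmem_read(volatile uint8_t *regs, int reg)
{
    return *(volatile uint16_t *)(regs + reg);
}

static inline void ivshmem_write(volatile uint8_t *regs, int reg, uint16_t val)
{
    *(volatile uint16_t *)(regs + reg) = val;
}

/* Example: read our own position, then ring a peer's doorbell.  How the
 * Doorbell value selects the target guest and vector is part of the spec
 * under discussion, so peer_id is only a placeholder. */
static void ivshmem_kick(volatile uint8_t *regs, uint16_t peer_id)
{
    uint16_t self = ivshmem_read(regs, IVPosition);
    (void)self;
    ivshmem_write(regs, Doorbell, peer_id);
}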
On 03/08/2010 07:57 PM, Cam Macdonell wrote:
Can you provide a spec that describes the device? This would be useful for
maintaining the code, writing guest drivers, and as a framework for review.
I'm not sure if you want the Qemu command-line part as part of the
spec here, but I've
On Tue, Mar 9, 2010 at 3:29 AM, Avi Kivity a...@redhat.com wrote:
On 03/08/2010 07:57 PM, Cam Macdonell wrote:
Can you provide a spec that describes the device? This would be useful
for
maintaining the code, writing guest drivers, and as a framework for
review.
I'm not sure if you want
On 03/09/2010 02:49 PM, Arnd Bergmann wrote:
On Monday 08 March 2010, Cam Macdonell wrote:
enum ivshmem_registers {
    IntrMask = 0,
    IntrStatus = 2,
    Doorbell = 4,
    IVPosition = 6,
    IVLiveList = 8
};
The first two registers are the interrupt mask and status registers.
On Tue, Mar 9, 2010 at 6:03 AM, Avi Kivity a...@redhat.com wrote:
On 03/09/2010 02:49 PM, Arnd Bergmann wrote:
On Monday 08 March 2010, Cam Macdonell wrote:
enum ivshmem_registers {
    IntrMask = 0,
    IntrStatus = 2,
    Doorbell = 4,
    IVPosition = 6,
    IVLiveList = 8
};
The
On 03/09/2010 05:27 PM, Cam Macdonell wrote:
Registers are used
for synchronization between guests sharing the same memory object when
interrupts are supported (this requires using the shared memory server).
How does the driver detect whether interrupts are supported or not?
On 03/09/2010 11:28 AM, Avi Kivity wrote:
On 03/09/2010 05:27 PM, Cam Macdonell wrote:
Registers are used
for synchronization between guests sharing the same memory object when
interrupts are supported (this requires using the shared memory
server).
How does the driver detect whether
On Tue, Mar 9, 2010 at 10:28 AM, Avi Kivity a...@redhat.com wrote:
On 03/09/2010 05:27 PM, Cam Macdonell wrote:
Registers are used
for synchronization between guests sharing the same memory object when
interrupts are supported (this requires using the shared memory server).
How does the
Paul Brook wrote:
However, coherence could be made host-type-independent by the host
mapping and unmapping pages, so that each page is only mapped into one
guest (or guest CPU) at a time. Just like some clustering filesystems
do to maintain coherence.
You're assuming that a TLB flush
Avi Kivity wrote:
On 03/08/2010 03:03 PM, Paul Brook wrote:
On 03/08/2010 12:53 AM, Paul Brook wrote:
Support an inter-vm shared memory device that maps a shared-memory
object as a PCI device in the guest. This patch also supports
interrupts between guests by communicating over a unix
Paul Brook wrote:
On 03/08/2010 12:53 AM, Paul Brook wrote:
Support an inter-vm shared memory device that maps a shared-memory
object as a PCI device in the guest. This patch also supports
interrupts between guests by communicating over a unix domain socket.
This patch applies to
On 03/08/2010 03:54 AM, Jamie Lokier wrote:
Alexander Graf wrote:
Or we could put in some code that tells the guest the host shm
architecture and only accept x86 on x86 for now. If anyone cares for
other combinations, they're free to implement them.
Seriously, we're looking at an interface
On 03/08/2010 07:16 AM, Avi Kivity wrote:
On 03/08/2010 03:03 PM, Paul Brook wrote:
On 03/08/2010 12:53 AM, Paul Brook wrote:
Support an inter-vm shared memory device that maps a shared-memory
object as a PCI device in the guest. This patch also supports
interrupts between guests by
In a cross environment that becomes extremely hairy. For example the x86
architecture effectively has an implicit write barrier before every
store, and an implicit read barrier before every load.
Btw, x86 doesn't have any implicit barriers due to ordinary loads.
Only stores and atomics
On Tue, Mar 9, 2010 at 5:03 PM, Paul Brook p...@codesourcery.com wrote:
In a cross environment that becomes extremely hairy. For example the x86
architecture effectively has an implicit write barrier before every
store, and an implicit read barrier before every load.
Btw, x86 doesn't have
On 08.03.2010 at 02:45, Jamie Lokier ja...@shareable.org wrote:
Paul Brook wrote:
Support an inter-vm shared memory device that maps a shared-memory
object
as a PCI device in the guest. This patch also supports interrupts
between
guests by communicating over a unix domain socket. This
On 03/08/2010 12:53 AM, Paul Brook wrote:
Support an inter-vm shared memory device that maps a shared-memory object
as a PCI device in the guest. This patch also supports interrupts between
guests by communicating over a unix domain socket. This patch applies to
the qemu-kvm repository.
Alexander Graf wrote:
Or we could put in some code that tells the guest the host shm
architecture and only accept x86 on x86 for now. If anyone cares for
other combinations, they're free to implement them.
Seriously, we're looking at an interface designed for kvm here. Let's
please
On 03/06/2010 01:52 AM, Cam Macdonell wrote:
Support an inter-vm shared memory device that maps a shared-memory object
as a PCI device in the guest. This patch also supports interrupts between
guests by communicating over a unix domain socket. This patch applies to the
qemu-kvm repository.
Jamie Lokier wrote:
Alexander Graf wrote:
Or we could put in some code that tells the guest the host shm
architecture and only accept x86 on x86 for now. If anyone cares for
other combinations, they're free to implement them.
Seriously, we're looking at an interface designed for kvm
On 03/08/2010 12:53 AM, Paul Brook wrote:
Support an inter-vm shared memory device that maps a shared-memory
object as a PCI device in the guest. This patch also supports
interrupts between guests by communicating over a unix domain socket.
This patch applies to the qemu-kvm repository.
However, coherence could be made host-type-independent by the host
mapping and unmapping pages, so that each page is only mapped into one
guest (or guest CPU) at a time. Just like some clustering filesystems
do to maintain coherence.
You're assuming that a TLB flush implies a write barrier,
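For concreteness, a host-side sketch of that single-owner idea: before a page is handed to another guest, the current owner's mapping is revoked, and the fault on the next access is where ownership would transfer back. This shows only the mapping half; Paul's objection is that the unmap/remap alone does not give you the ordering guarantees you also need. Names and the fault-handling machinery are assumed, not taken from qemu:

#include <stddef.h>
#include <sys/mman.h>

/* Revoke one guest's view of a shared page (its next access faults). */
static int revoke_page(void *guest_mapping, size_t page_size)
{
    return mprotect(guest_mapping, page_size, PROT_NONE);
}

/* Grant the page back once ownership has moved to this guest. */
static int grant_page(void *guest_mapping, size_t page_size)
{
    return mprotect(guest_mapping, page_size, PROT_READ | PROT_WRITE);
}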
On 03/08/2010 03:03 PM, Paul Brook wrote:
On 03/08/2010 12:53 AM, Paul Brook wrote:
Support an inter-vm shared memory device that maps a shared-memory
object as a PCI device in the guest. This patch also supports
interrupts between guests by communicating over a unix domain socket.
This
On Mon, Mar 8, 2010 at 2:56 AM, Avi Kivity a...@redhat.com wrote:
On 03/06/2010 01:52 AM, Cam Macdonell wrote:
Support an inter-vm shared memory device that maps a shared-memory object
as a PCI device in the guest. This patch also supports interrupts between
guests by communicating over a
Support an inter-vm shared memory device that maps a shared-memory object
as a PCI device in the guest. This patch also supports interrupts between
guests by communicating over a unix domain socket. This patch applies to
the qemu-kvm repository.
No. All new devices should be fully qdev
Paul Brook wrote:
Support an inter-vm shared memory device that maps a shared-memory object
as a PCI device in the guest. This patch also supports interrupts between
guests by communicating over a unix domain socket. This patch applies to
the qemu-kvm repository.
No. All new devices
Support an inter-vm shared memory device that maps a shared-memory object
as a PCI device in the guest. This patch also supports interrupts between
guests by communicating over a unix domain socket. This patch applies to the
qemu-kvm repository.
This device now creates a qemu character device