On 03/10/2010 07:41 PM, Paul Brook wrote:
You're much better off using a bulk-data transfer API that relaxes
coherency requirements. IOW, shared memory doesn't make sense for TCG
Rather, tcg doesn't make sense for shared memory smp. But we knew that
already.
I think TCG SMP is a hard, but soluble problem, especially when
On Thu, 11 Mar 2010, Nick Piggin wrote:
On Thu, Mar 11, 2010 at 03:10:47AM +, Jamie Lokier wrote:
Paul Brook wrote:
In a cross environment that becomes extremely hairy. For example the x86
architecture effectively has an implicit write barrier before every
store, and an implicit read barrier before every load.
On 03/09/2010 11:44 PM, Anthony Liguori wrote:
Ah yes. For cross tcg environments you can map the memory using mmio
callbacks instead of directly, and issue the appropriate barriers there.
Not good enough unless you want to severely restrict the use of shared
memory within the guest.
For
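A sketch of the mmio-callback idea under discussion, with illustrative names rather than QEMU's actual io-memory API: every guest access to the shared region is routed through a handler that brackets the access with a host fence, preserving the guest's expected ordering on a weakly ordered host.

#include <stdatomic.h>
#include <stdint.h>

static uint8_t *shm_base;   /* host-side mapping of the shared object */

/* hypothetical callbacks; a device model would register these for the
 * region instead of mapping it into the guest directly */
uint64_t shm_mmio_read(uint64_t offset)
{
    uint64_t val = *(volatile uint64_t *)(shm_base + offset);
    atomic_thread_fence(memory_order_seq_cst);  /* order against later accesses */
    return val;
}

void shm_mmio_write(uint64_t offset, uint64_t val)
{
    atomic_thread_fence(memory_order_seq_cst);  /* order against earlier accesses */
    *(volatile uint64_t *)(shm_base + offset) = val;
}

The cost is a callback (and a fence) on every access, which is why the objection above calls it too restrictive for general use of the shared memory within the guest.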
On 03/10/2010 06:38 AM, Cam Macdonell wrote:
On Tue, Mar 9, 2010 at 5:03 PM, Paul Brook p...@codesourcery.com wrote:
In a cross environment that becomes extremely hairy. For example the x86
architecture effectively has an implicit write barrier before every
store, and an implicit read
As of March 2009[1] Intel guarantees that memory reads occur in order
(they may only be reordered relative to writes). It appears AMD do not
provide this guarantee, which could be an interesting problem for
heterogeneous migration.
Interesting, but what ordering would cause problems
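To make the ordering question concrete, a minimal C11 sketch (not from the thread; all names illustrative) of the message-passing pattern these guarantees protect:

#include <stdatomic.h>

static _Atomic int shared_data;
static _Atomic int ready;

void producer(void)
{
    atomic_store_explicit(&shared_data, 42, memory_order_relaxed);
    /* release: the data store must be visible before the flag store.
     * Free on x86 (stores are not reordered with stores); a real
     * fence on ARM or POWER. */
    atomic_store_explicit(&ready, 1, memory_order_release);
}

int consumer(void)
{
    /* acquire: the flag load must happen before the data load.
     * Free on x86 given the load-load ordering cited above. */
    while (!atomic_load_explicit(&ready, memory_order_acquire))
        ;
    return atomic_load_explicit(&shared_data, memory_order_relaxed);
}

If the host reorders where the guest's architecture would not (the Intel-to-AMD or cross-architecture migration cases above), the consumer can observe ready == 1 but stale shared_data.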
Paul Brook wrote:
In a cross environment that becomes extremely hairy. For example the x86
architecture effectively has an implicit write barrier before every
store, and an implicit read barrier before every load.
Btw, x86 doesn't have any implicit barriers due to ordinary loads.
Only stores and atomics
Paul Brook wrote:
However, coherence could be made host-type-independent by the host
mapping and unmapping pages, so that each page is only mapped into one
guest (or guest CPU) at a time. Just like some clustering filesystems
do to maintain coherence.
You're assuming that a TLB flush implies a write barrier,
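A sketch of that page-ownership scheme, with hypothetical names: only one guest has a page mapped at a time, and transferring ownership revokes the old owner's mapping. The objection above still applies, so the sketch adds the write barrier explicitly rather than relying on the TLB flush to provide one.

#include <stdatomic.h>
#include <stddef.h>
#include <sys/mman.h>

/* give up a page so another guest can map it */
int grant_page(void *page, size_t pagesize)
{
    /* make prior writes visible before access is revoked */
    atomic_thread_fence(memory_order_release);
    return mprotect(page, pagesize, PROT_NONE);
}

/* take ownership of a page we were just granted */
int accept_page(void *page, size_t pagesize)
{
    return mprotect(page, pagesize, PROT_READ | PROT_WRITE);
}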
On 03/08/2010 03:54 AM, Jamie Lokier wrote:
Alexander Graf wrote:
Or we could put in some code that tells the guest the host shm
architecture and only accept x86 on x86 for now. If anyone cares for
other combinations, they're free to implement them.
Seriously, we're looking at an interface designed for kvm here. Let's
please
On 03/08/2010 12:53 AM, Paul Brook wrote:
Support an inter-vm shared memory device that maps a shared-memory object
as a PCI device in the guest. This patch also supports interrupts between
guests by communicating over a unix domain socket. This patch applies to
the qemu-kvm repository.
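A hypothetical guest-side sketch of what that description implies: the shared-memory object shows up as a PCI BAR, so a Linux guest can mmap it through sysfs. The bus address, BAR number, and size below are illustrative, not taken from the patch.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* illustrative path: a BAR of the shared-memory device */
    int fd = open("/sys/bus/pci/devices/0000:00:04.0/resource2", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    size_t len = 1 << 20;  /* must match the configured region size */
    void *shm = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (shm == MAP_FAILED) { perror("mmap"); return 1; }

    /* now visible to every guest attached to the same object */
    ((volatile int *)shm)[0] = 1234;

    munmap(shm, len);
    close(fd);
    return 0;
}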
Paul Brook wrote:
Support an inter-vm shared memory device that maps a shared-memory object
as a PCI device in the guest. This patch also supports interrupts between
guests by communicating over a unix domain socket. This patch applies to
the qemu-kvm repository.
No. All new devices should be fully qdev
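The interrupt path the description mentions can be sketched as below; connect_peer and notify_peer are hypothetical names, not the patch's actual protocol. One byte on the unix domain socket acts as a doorbell that the device model on the receiving side would turn into a PCI interrupt in its guest.

#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int connect_peer(const char *path)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}

/* ring the doorbell: the peer's device model raises an interrupt */
void notify_peer(int fd)
{
    char token = 1;
    (void)write(fd, &token, 1);
}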