The X fbdev driver is going to make supporting a new fb ioctl pretty
fun. It currently doesn't even support the existing fb ioctls and has a
strange abstraction layer.
I reckon writing a new X driver from scratch (or based on something like
the vnc X driver) would be easier in the long run.
On Tue, 2007-08-21 at 22:23 -0700, Zachary Amsden wrote:
In general, I/O in a virtual guest is subject to performance problems.
The I/O cannot be completed physically, but must be virtualized. This
means trapping and decoding port I/O instructions from the guest OS.
Not only is the trap
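The per-trap decode cost mentioned above can be illustrated with a toy decoder for the x86 port I/O opcodes. The opcode values are the real x86 encodings; the struct and function names are made up for illustration and are not from any hypervisor's code:

```c
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

/* Illustrative sketch: the kind of per-trap decode work a hypervisor
 * must do for every port I/O instruction the guest executes. */
struct pio_op { int is_in; int width; int imm_port; /* -1 => port in DX */ };

/* Returns 1 if insn[0] is a port I/O opcode, filling *op. */
int decode_pio(const uint8_t *insn, struct pio_op *op)
{
    switch (insn[0]) {
    case 0xE4: *op = (struct pio_op){1, 1, insn[1]}; return 1; /* IN  AL, imm8  */
    case 0xE5: *op = (struct pio_op){1, 4, insn[1]}; return 1; /* IN  EAX, imm8 */
    case 0xE6: *op = (struct pio_op){0, 1, insn[1]}; return 1; /* OUT imm8, AL  */
    case 0xE7: *op = (struct pio_op){0, 4, insn[1]}; return 1; /* OUT imm8, EAX */
    case 0xEC: *op = (struct pio_op){1, 1, -1};      return 1; /* IN  AL, DX    */
    case 0xEE: *op = (struct pio_op){0, 1, -1};      return 1; /* OUT DX, AL    */
    default:   return 0;                                       /* not port I/O  */
    }
}
```

A paravirt hypercall sidesteps all of this: the guest passes port, width, and direction as arguments, so the hypervisor never has to fetch and decode the faulting instruction.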
On Wed, 2007-08-22 at 16:25 +1000, Rusty Russell wrote:
On Wed, 2007-08-22 at 08:34 +0300, Avi Kivity wrote:
Zachary Amsden wrote:
This patch provides hypercalls for the i386 port I/O instructions,
which vastly helps guests which use native-style drivers. For certain
VMI workloads,
We have an X driver that does minimal performance costing operations.
As we should and will have for our other drivers.
Ok, so you use your own DDX and prevent the X vga crapware from kicking
in ? Makes sense.
Ben.
___
Virtualization mailing list
On Tue, 2008-06-03 at 14:49 +0200, Christian Borntraeger wrote:
This patch tries to change hvc_console to not use request_irq/free_irq if
the backend does not use irqs. This allows virtio_console to use hvc_console
without having a linker reference to request_irq/free_irq.
The irq specific
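The shape of that change can be sketched in userspace C. The names below (`hv_ops`, `console_open`) are illustrative stand-ins, not the actual hvc_console API: the point is that an optional function pointer lets an irq-less backend avoid any reference to the IRQ code at link time.

```c
#include <stddef.h>
#include <assert.h>

/* Sketch: a console backend exposes an optional interrupt hook; when it
 * is NULL the core falls back to polling and never calls into the IRQ
 * layer, so a backend like virtio_console carries no linker dependency
 * on request_irq/free_irq. */
struct hv_ops {
    int (*notifier_add)(int irq);   /* NULL => backend has no interrupt */
};

enum { MODE_IRQ, MODE_POLL };

int console_open(const struct hv_ops *ops, int irq)
{
    if (ops->notifier_add && ops->notifier_add(irq) == 0)
        return MODE_IRQ;
    return MODE_POLL;               /* timer-driven polling path */
}

/* Two toy backends for demonstration. */
static int dummy_notifier(int irq) { (void)irq; return 0; }
static const struct hv_ops irq_backend  = { dummy_notifier };
static const struct hv_ops poll_backend = { NULL };
```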
On Mon, 2008-06-16 at 04:30 -0700, Jeremy Fitzhardinge wrote:
The only current user of this interface is mprotect
Do you plan to use it with fork ultimately ?
Ben.
On Wed, 2008-06-18 at 17:24 -0700, Linus Torvalds wrote:
On Wed, 18 Jun 2008, Jeremy Fitzhardinge wrote:
Along the lines of:
Hell no. There's a reason we have a special set_wrprotect() thing. We can
do it more efficiently on native hardware by just clearing the bit
atomically. No
Which architecture are you interested in? If it isn't x86, you can
probably get anything past Linus ;)
I'll do some measurements to see what effect the batchable
ptep_set_wrprotect() has on native. If it's significant, I'll propose
making it conditional on CONFIG_PARAVIRT.
Oh, I
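The trade-off being measured can be sketched in userspace C. `PTE_RW`, the function names, and the `set_pte` callback are all illustrative, not the kernel's actual interfaces:

```c
#include <stdatomic.h>
#include <stdint.h>
#include <assert.h>

#define PTE_RW ((uint64_t)1 << 1)   /* illustrative write-enable bit */

/* Native path: clear the RW bit with a single atomic AND; no separate
 * read is needed, which is why it is cheap on bare hardware. */
void wrprotect_native(_Atomic uint64_t *pte)
{
    atomic_fetch_and(pte, ~PTE_RW);
}

/* Paravirt-friendly path: read, modify, then write back through a hook
 * that a hypervisor could queue up and batch. */
void wrprotect_batched(uint64_t *pte, void (*set_pte)(uint64_t *, uint64_t))
{
    set_pte(pte, *pte & ~PTE_RW);
}

/* Trivial native implementation of the hook, for demonstration. */
static void plain_set_pte(uint64_t *p, uint64_t v) { *p = v; }
```

The batched form costs an extra load and an indirect call on native hardware, which is exactly the overhead that would motivate hiding it behind CONFIG_PARAVIRT.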
On Wed, 2009-08-26 at 21:15 +0530, Amit Shah wrote:
- Convert hvc's usage of spinlocks to mutexes. I've no idea how this
will play out; I'm no expert here. But I did try doing this and so far
it all looks OK. No lockups, lockdep warnings, nothing. I have full
debugging enabled.
On Thu, 2009-08-27 at 10:08 +0100, Alan Cox wrote:
- Then, are we certain that there's no case where the tty layer will
call us with some lock held or in an atomic context ? To be honest,
I've totally lost track of the locking rules in tty land lately so it
might well be ok, but something
On Tue, 2009-12-22 at 20:04 +0530, Amit Shah wrote:
From: Rusty Russell ru...@rustcorp.com.au
This is nicer for modern R/O protection. And no one needs it non-const, so
constify the callers as well.
Rusty, do you want me to take these via powerpc ?
Cheers,
Ben.
Signed-off-by: Rusty
On Mon, 2011-01-17 at 12:07 +0100, Peter Zijlstra wrote:
For future rework of try_to_wake_up() we'd like to push part of that
onto the CPU the task is actually going to run on. In order to do so, we
need a generic callback from the existing scheduler IPI.
This patch introduces such a generic
();
break;
case PPC_MSG_CALL_FUNC_SINGLE:
generic_smp_call_function_single_interrupt();
Fold that in and add:
Acked-by: Benjamin Herrenschmidt b...@kernel.crashing.org
(We have two variants of the IPIs)
Cheers,
Ben
On Mon, 2011-02-07 at 14:54 +0100, Peter Zijlstra wrote:
On Mon, 2011-02-07 at 10:26 +1100, Benjamin Herrenschmidt wrote:
You missed:
diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 9813605..467d122 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc
On Tue, 2011-11-29 at 14:31 +0200, Ohad Ben-Cohen wrote:
Virtio is using memory barriers to control the ordering of
references to the vrings on SMP systems. When the guest is compiled
with SMP support, virtio is only using SMP barriers in order to
avoid incurring the overhead involved with
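The ordering virtio needs when publishing a ring entry can be sketched with C11 atomics: the descriptor write must be visible before the updated index. A release/acquire pair is the SMP-barrier strength being discussed; the heavier mandatory barriers are only required for real MMIO. The `ring` layout and names here are illustrative, not the actual vring structures:

```c
#include <stdatomic.h>
#include <stdint.h>
#include <assert.h>

struct ring {
    uint32_t desc[16];            /* stand-in for the descriptor table */
    _Atomic uint32_t avail_idx;   /* stand-in for the avail index */
};

void ring_publish(struct ring *r, uint32_t idx, uint32_t val)
{
    r->desc[idx % 16] = val;
    /* release store == SMP write barrier: the descriptor is ordered
     * before the index the other side polls on */
    atomic_store_explicit(&r->avail_idx, idx + 1, memory_order_release);
}

uint32_t ring_consume(struct ring *r, uint32_t *idx_out)
{
    uint32_t idx = atomic_load_explicit(&r->avail_idx, memory_order_acquire);
    *idx_out = idx;
    return idx ? r->desc[(idx - 1) % 16] : 0;
}
```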
On Sun, 2011-12-11 at 14:25 +0200, Michael S. Tsirkin wrote:
Forwarding some results by Amos, who ran multiple netperf streams in
parallel, from an external box to the guest. TCP_STREAM results were
noisy. This could be due to buffering done by TCP, where packet size
varies even as message
On Mon, 2011-12-19 at 10:19 +0800, Amos Kong wrote:
I tested with the same environment and scenarios.
Each scenario was run three times and the results averaged for better
precision.
Thanks, Amos
--- compare results: Mon Dec 19 09:51:09 2011 ---
On Wed, 2012-01-11 at 10:55 +1030, Rusty Russell wrote:
On Tue, 10 Jan 2012 19:03:36 +0200, Michael S. Tsirkin m...@redhat.com
wrote:
On Wed, Dec 21, 2011 at 11:03:25AM +1030, Rusty Russell wrote:
Yes. The idea that we can alter fields in the device-specific config
area is flawed.
On Wed, 2012-01-11 at 08:47 +, Stefan Hajnoczi wrote:
This is also an opportunity to stop using CPU physical addresses in
the ring and instead perform DMA like a normal PCI device (use bus
addresses).
Euh why ?
That would mean in many cases adding a layer of iommu, which will slow
On Wed, 2012-01-11 at 14:28 +, Stefan Hajnoczi wrote:
On Wed, Jan 11, 2012 at 9:10 AM, Benjamin Herrenschmidt
b...@kernel.crashing.org wrote:
On Wed, 2012-01-11 at 08:47 +, Stefan Hajnoczi wrote:
This is also an opportunity to stop using CPU physical addresses in
the ring
On Wed, 2012-01-11 at 17:12 +0200, Michael S. Tsirkin wrote:
This is similar to what we have now. But it's still buggy: e.g. if guest
updates MAC byte by byte, we have no way to know when it's done doing
so.
Do like real HW, there's plenty of options:
- (better) Have a command update MAC
On Wed, 2012-01-11 at 17:21 +0200, Michael S. Tsirkin wrote:
Possible but doesn't let us layer nicely to allow unchanged drivers
that work with all transports (new pci, old pci, non pci).
Something like a command VQ would be a generic transport
that can be hidden behind config-set(...).
I
On Wed, 2012-01-11 at 13:42 -0600, Anthony Liguori wrote:
On 01/11/2012 11:08 AM, Michael S. Tsirkin wrote:
Not sure what you mean. Using VQ is DMA which is pretty common for PCI.
Do you know of a network device that obtains its MAC address via a DMA
transaction?
I wouldn't be
On Wed, 2012-01-11 at 14:26 -0600, Anthony Liguori wrote:
I'd say that's a special case but I see what you're getting at here.
So what about keeping the config space read-only and using control
queues for
everything else?
Which is exactly what Rusty and I are proposing :-) I would go
On Thu, 2012-01-12 at 00:02 +0200, Michael S. Tsirkin wrote:
We could probably have a helper library for sending control messages
which could handle waiting for a ring slot to be free (practically
always the case on control queues), writing the message, sending it
and
waiting for a status
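Such a helper might look like the sketch below. The layout loosely mirrors virtio-net's control header (class, command, payload, trailing status byte, with 0 meaning OK), but the function names and the function-pointer "transport" standing in for the real virtqueue kick/poll are purely illustrative:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <assert.h>

struct ctrl_msg {
    uint8_t class_, cmd;
    uint8_t payload[16];
    uint8_t status;          /* written back by the "device" */
};

typedef void (*xmit_fn)(struct ctrl_msg *);

/* Build a control message, send it, and wait for the status the device
 * writes back.  In real code this would grab a ring slot (practically
 * always free on a control queue), kick, and poll the used ring. */
int ctrl_send(xmit_fn xmit, uint8_t class_, uint8_t cmd,
              const void *data, size_t len)
{
    struct ctrl_msg m = { .class_ = class_, .cmd = cmd, .status = 0xff };
    if (len > sizeof(m.payload))
        return -1;
    memcpy(m.payload, data, len);
    xmit(&m);
    return m.status == 0 ? 0 : -1;   /* 0 == OK */
}

/* Toy "device" for demonstration: acks class-1 (e.g. MAC-set) commands,
 * rejects everything else. */
static void toy_device(struct ctrl_msg *m)
{
    m->status = (m->class_ == 1) ? 0 : 1;
}
```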
On Thu, 2012-01-12 at 00:13 +0200, Michael S. Tsirkin wrote:
We typically pre-populate the data rings with skb's for 1500 and 9000
bytes packets. Small packets come in immediately in the completion ring,
and large packets via the data ring.
Won't real workloads suffer from packet
On Thu, 2012-01-12 at 00:13 +0200, Michael S. Tsirkin wrote:
Well, I would argue that the network driver world has proven countless
times that those are good ideas :-)
Below you seem to suggest that separate rings like
virtio has now is better than a single ring like Rusty
suggested.
I
On Thu, 2012-01-12 at 12:31 +1030, Rusty Russell wrote:
Are we going to keep guest endian for e.g. virtio net header?
If yes the benefit of switching config space is not that big.
And changes in devices would affect non-PCI transports.
Yep. It would only make sense if we do it for
On Mon, 2012-06-04 at 14:15 +0930, Rusty Russell wrote:
Something along those lines is also needed for remote processors
which
access memory via an IOMMU (e.g. OMAP4's M3 and DSP).
Allocating the memory via the DMA API will seamlessly configure the
relevant IOMMU as needed, and will
On Wed, 2013-02-13 at 15:28 +, Marc Zyngier wrote:
Well, the spec clearly says that the registers reflect the endianness of
the guest, and it makes sense: when performing the MMIO access, KVM
needs to convert between host and guest endianness.
It's actually a horrible idea :-)
What does
On Wed, 2014-08-27 at 20:40 +0930, Rusty Russell wrote:
Hi Andy,
This has long been a source of contention. virtio assumes that
the hypervisor can decode guest-physical addresses.
PowerPC, in particular, doesn't want to pay the cost of IOMMU
manipulations, and all
On Wed, 2014-08-27 at 13:52 +0200, Michael S. Tsirkin wrote:
For x86 as of QEMU 2.0 there's no iommu.
So a reasonable thing to do for that platform
might be to always use iommu *if it's there*.
My understanding is this isn't the case for powerpc?
All 64-bit powerpc have an iommu but not all
On Mon, 2014-09-01 at 10:39 -0700, Andy Lutomirski wrote:
Changes from v1:
- Using the DMA API is optional now. It would be nice to improve the
DMA API to the point that it could be used unconditionally, but s390
proves that we're not there yet.
- Includes patch 4, which fixes DMA
On Mon, 2014-09-01 at 22:55 -0700, Andy Lutomirski wrote:
On x86, at least, I doubt that we'll ever see a physically addressed
PCI virtio device for which ACPI advertises an IOMMU, since any sane
hypervisor will just not advertise an IOMMU for the virtio device.
But are there arm64 or PPC
On Tue, 2014-09-02 at 16:56 -0400, Konrad Rzeszutek Wilk wrote:
On Wed, Sep 03, 2014 at 06:53:33AM +1000, Benjamin Herrenschmidt wrote:
On Mon, 2014-09-01 at 22:55 -0700, Andy Lutomirski wrote:
On x86, at least, I doubt that we'll ever see a physically addressed
PCI virtio device
On Tue, 2014-09-02 at 14:37 -0700, Andy Lutomirski wrote:
Let's take a step back from the implementation. What is a driver
for a virtio PCI device (i.e. a PCI device with vendor 0x1af4)
supposed to do on ppc64?
Today, it's supposed to send guest physical addresses. We can make that
On Fri, 2014-09-05 at 12:01 +0930, Rusty Russell wrote:
If the answers are both yes, then x86 is going to be able to use
virtio+IOMMU, so PPC looks like the odd one out.
Well, yes and no ... ppc will be able to do that too, it's just
pointless and will hurt performance.
Additionally, it will
On Thu, 2014-09-04 at 19:57 -0700, Andy Lutomirski wrote:
There's a third option: try to make virtio-mmio work everywhere
(except s390), at least in the long run. This other benefits: it
makes minimal hypervisors simpler, I think it'll get rid of the limits
on the number of virtio devices in
On Tue, 2014-09-02 at 16:11 -0700, Andy Lutomirski wrote:
I don't think so. I would argue that it's a straight-up bug for QEMU
to expose a physically-addressed virtio-pci device to the guest behind
an emulated IOMMU. QEMU may already be doing that on ppc64, but it
isn't on x86_64 or arm
On Wed, 2014-09-03 at 09:47 +0200, Paolo Bonzini wrote:
IOMMU support for x86 is going to go in this week.
But won't that break virtio on x86 ? Or will virtio continue bypassing
it ? IE, the guest side virtio doesn't expect an IOMMU and doesn't call
the dma mappings ops.
However, it is and
On Tue, 2014-09-02 at 16:42 -0700, Andy Lutomirski wrote:
But there aren't any ACPI systems with both virtio-pci and IOMMUs,
right? So we could say that, henceforth, ACPI systems must declare
whether virtio-pci devices live behind IOMMUs without breaking
backward compatibility.
I don't know
On Tue, 2014-09-02 at 17:32 -0700, Andy Lutomirski wrote:
I agree *except* that implementing it will be a real PITA and (I
think) can't be done without changing code in arch/. My patches plus
an ifdef powerpc will be functionally equivalent, just uglier.
So for powerpc, it's a 2 liner
On Tue, 2014-09-16 at 22:22 -0700, Andy Lutomirski wrote:
On non-PPC systems, virtio_pci should use the DMA API. This fixes
virtio_pci on Xen. On PPC, using the DMA API would break things, so
we need to preserve the old behavior.
The big comment in this patch explains the considerations in
On Wed, 2014-09-17 at 09:49 -0700, David Woodhouse wrote:
On Wed, 2014-09-17 at 09:07 -0700, Andy Lutomirski wrote:
I still think that this is a property of the bus, not the device. x86
has such a mechanism, and this patch uses it transparently.
Right. A device driver should use the
On Wed, 2014-09-17 at 17:16 +0300, Michael S. Tsirkin wrote:
On Wed, Sep 17, 2014 at 08:02:31AM -0400, Benjamin Herrenschmidt wrote:
On Tue, 2014-09-16 at 22:22 -0700, Andy Lutomirski wrote:
On non-PPC systems, virtio_pci should use the DMA API. This fixes
virtio_pci on Xen. On PPC
On Wed, 2014-09-17 at 09:07 -0700, Andy Lutomirski wrote:
It shouldn't. That being said, at some point this problem will need
solving on PPC, and this patch doesn't help much, other than adding
the virtio_ring piece.
I'd really like to see the generic or arch IOMMU code handle this so
that
On Fri, 2014-09-19 at 22:59 -0700, Andy Lutomirski wrote:
Sure.
The question is: should the patches go in to 3.18 as is, or should
they wait? It would be straightforward to remove the use_dma_api
switch once PPC, s390, and virtio_mmio are ready.
I don't mind the patches going in now in their
On Sun, 2014-09-21 at 15:03 +1000, Benjamin Herrenschmidt wrote:
The exception I mentioned is that I would really like the virtio device
to expose via whatever transport we chose to use (though capability
exchange sounds like a reasonable one) whether the server
implementation is bypassing
On Wed, 2014-09-24 at 14:41 -0700, Andy Lutomirski wrote:
On Sat, Sep 20, 2014 at 10:05 PM, Benjamin Herrenschmidt
b...@kernel.crashing.org wrote:
On Sun, 2014-09-21 at 15:03 +1000, Benjamin Herrenschmidt wrote:
The exception I mentioned is that I would really like the virtio device
On Wed, 2014-09-24 at 14:59 -0700, Andy Lutomirski wrote:
Scratch that idea, then.
The best that I can currently come up with is to say that pre-1.0
devices on PPC bypass the IOMMU and that 1.0 devices on PPC and all
devices on all other architectures do not bypass the IOMMU.
Well, the
On Wed, 2014-09-24 at 15:15 -0700, Andy Lutomirski wrote:
On Wed, Sep 24, 2014 at 3:04 PM, Benjamin Herrenschmidt
b...@kernel.crashing.org wrote:
On Wed, 2014-09-24 at 14:59 -0700, Andy Lutomirski wrote:
Scratch that idea, then.
The best that I can currently come up with is to say
On Mon, 2014-09-29 at 11:55 -0700, Andy Lutomirski wrote:
Rusty and Michael, what's the status of this?
The status is that I still think we need *a* way to actually inform the
guest whether the virtio implementation will or will not bypass the
IOMMU. I don't know Xen well enough to figure out how to
On Mon, 2014-09-29 at 13:55 -0700, Andy Lutomirski wrote:
If the eventual solution is to say that virtio 1.0 PCI devices always
respect an IOMMU unless they set a magic flag saying I'm not real
hardware and I bypass the IOMMU, then I don't really object to that,
except that it'll be a mess if
On Mon, 2014-10-06 at 11:59 +0200, Christian Borntraeger wrote:
Just as a comment: On s390 we always considered the memory access as
access to real memory (not device memory) for virtio accesses. I
prefer to not touch the DMA API on s390 as it is quite s390-PCI
specific but it is somewhat
On Wed, 2014-10-22 at 16:17 +0200, Jan Kiszka wrote:
I thought about this again, and I'm not sure anymore if we can use ACPI
to black-list the incompatible virtio devices. Reason: hotplug. To my
understanding, the ACPI DRHD tables won't change during runtime when a
device shows up or
On Wed, 2015-03-11 at 23:03 +0100, Greg Kurz wrote:
/* The host notifier will be swapped in adjust_endianness() according to the
* target default endianness. We need to negate this swap if the device uses
* an endianness that is not the default (ppc64le for example).
*/
+static
On Tue, 2015-07-28 at 17:47 -0700, Andy Lutomirski wrote:
Yes, virtio flag. I dislike having a virtio flag at all, but so far
no one has come up with any better ideas. If there was a reliable,
cross-platform mechanism for per-device PCI bus properties, I'd be all
for using that instead.
On Tue, 2015-07-28 at 15:43 -0700, Andy Lutomirski wrote:
Let me try to summarize a proposal:
Add a feature flag that indicates IOMMU support.
New kernels acknowledge that flag on any device that advertises it.
New kernels always respect the IOMMU (except on PowerPC).
Why ? I disagree,
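For reference, the flag-driven dispatch being argued over amounts to something like the sketch below. VIRTIO_F_IOMMU_PLATFORM is the real feature bit (33); the enum and function are illustrative, and note this is exactly the scheme the PowerPC objection is about:

```c
#include <stdint.h>
#include <assert.h>

#define VIRTIO_F_IOMMU_PLATFORM 33   /* real feature bit number */

enum dma_path { DMA_PHYSICAL, DMA_VIA_IOMMU };

/* Only when both device and driver accept the flag does the driver
 * route buffers through the platform IOMMU / DMA API; otherwise it
 * keeps handing the device raw guest-physical addresses. */
enum dma_path choose_dma_path(uint64_t device_features,
                              uint64_t driver_features)
{
    uint64_t bit = (uint64_t)1 << VIRTIO_F_IOMMU_PLATFORM;
    return (device_features & driver_features & bit) ? DMA_VIA_IOMMU
                                                     : DMA_PHYSICAL;
}
```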
On Tue, 2015-07-28 at 16:33 -0700, Andy Lutomirski wrote:
On Tue, Jul 28, 2015 at 4:21 PM, Benjamin Herrenschmidt
b...@kernel.crashing.org wrote:
On Tue, 2015-07-28 at 15:43 -0700, Andy Lutomirski wrote:
Let me try to summarize a proposal:
Add a feature flag that indicates IOMMU support
On Wed, 2015-07-29 at 10:17 +0200, Paolo Bonzini wrote:
On 29/07/2015 02:47, Andy Lutomirski wrote:
If new kernels ignore the IOMMU for devices that don't set the flag
and there are physical devices that already exist and don't set the
flag, then those devices won't work reliably on
On Tue, 2015-07-28 at 10:16 +0200, Paolo Bonzini wrote:
On 28/07/2015 03:08, Andy Lutomirski wrote:
On Mon, Sep 1, 2014 at 10:39 AM, Andy Lutomirski l...@amacapital.net
wrote:
This fixes virtio on Xen guests as well as on any other platform
that uses virtio_pci on which physical
On Tue, 2015-11-10 at 11:27 +0100, Joerg Roedel wrote:
>
> You have the same problem when real PCIe devices appear that speak
> virtio. I think the only real (still not very nice) solution is to add a
> quirk to powerpc platform code that sets noop dma-ops for the existing
> virtio
On Tue, 2015-11-10 at 14:43 +0200, Michael S. Tsirkin wrote:
> But not virtio-pci I think - that's broken for that usecase since we use
> weaker barriers than required for real IO, as these have measureable
> overhead. We could have a feature "is a real PCI device",
> that's completely
On Tue, 2015-11-10 at 10:54 -0800, Andy Lutomirski wrote:
>
> Does that work on powerpc on existing kernels?
>
> Anyway, here's another crazy idea: make the quirk assume that the
> IOMMU is bypassed if and only if the weak barriers bit is set on
> systems that are missing the new DT binding.
On Mon, 2015-11-09 at 18:18 -0800, Andy Lutomirski wrote:
>
> /* Qumranet donated their vendor ID for devices 0x1000 thru 0x10FF.
> */
> static const struct pci_device_id virtio_pci_id_table[] = {
> { PCI_DEVICE(0x1af4, PCI_ANY_ID) },
> { 0 }
> };
>
> Can we match on that range?
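A quirk keyed off that ID range rather than a driver match might look like this sketch (the vendor ID 0x1af4 and device range 0x1000-0x10ff are real; the function name is made up):

```c
#include <stdint.h>
#include <assert.h>

#define PCI_VENDOR_ID_REDHAT_QUMRANET 0x1af4

/* Virtio devices live in the device ID range 0x1000-0x10ff that
 * Qumranet donated; platform code could match on the range without
 * the virtio driver being loaded at all. */
int is_virtio_pci_id(uint16_t vendor, uint16_t device)
{
    return vendor == PCI_VENDOR_ID_REDHAT_QUMRANET &&
           device >= 0x1000 && device <= 0x10ff;
}
```

That would cover the vfio case mentioned below, where the device is assigned to a nested guest and the virtio driver never binds to it in L1.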
On Mon, 2015-11-09 at 16:46 -0800, Andy Lutomirski wrote:
> The problem here is that in some of the problematic cases the virtio
> driver may not even be loaded. If someone runs an L1 guest with an
> IOMMU-bypassing virtio device and assigns it to L2 using vfio, then
> *boom* L1 crashes. (Same
On Mon, 2015-11-09 at 18:18 -0800, Andy Lutomirski wrote:
>
> Which leaves the special case of Xen, where even preexisting devices
> don't bypass the IOMMU. Can we keep this specific to powerpc and
> sparc? On x86, this problem is basically nonexistent, since the IOMMU
> is properly
On Mon, 2015-11-09 at 21:35 -0800, Andy Lutomirski wrote:
>
> We could do it the other way around: on powerpc, if a PCI device is in
> that range and doesn't have the "bypass" property at all, then it's
> assumed to bypass the IOMMU. This means that everything that
> currently works continues
On Tue, 2015-11-10 at 15:44 -0800, Andy Lutomirski wrote:
>
> > What about partition <-> partition virtio such as what we could do on
> > PAPR systems. That would have the weak barrier bit.
> >
>
> Is it partition <-> partition, bypassing IOMMU?
No.
> I think I'd settle for just something that
On Tue, 2015-11-10 at 10:45 +0100, Knut Omang wrote:
> Can something be done by means of PCIe capabilities?
> ATS (Address Translation Support) seems like a natural choice?
Euh no... ATS is something else completely
Cheers,
Ben.
On Tue, 2015-11-10 at 20:46 -0800, Andy Lutomirski wrote:
> Me neither. At least it wouldn't be a regression, but it's still
> crappy.
>
> I think that arm is fine, at least. I was unable to find an arm QEMU
> config that has any problems with my patches.
Ok, give me a few days for my headache
So ...
I've finally tried to sort that out for powerpc and I can't find a way
to make that work that isn't a complete pile of stinking shit.
I'm very tempted to go back to my original idea: virtio itself should
indicate it's "bypassing ability" via the virtio config space or some
other bit (like
On Thu, 2015-11-19 at 23:38 +, David Woodhouse wrote:
>
> I understand that POWER and other platforms don't currently have a
> clean way to indicate that certain device don't have translation. And I
> understand that we may end up with a *quirk* which ensures that the DMA
> API does the right
On Thu, 2016-06-02 at 17:22 +0800, Pan Xinhui wrote:
> Base code to enable qspinlock on powerpc. This patch adds some #ifdefs
> here and there. Although there is no paravirt related code, we can
> successfully build a qspinlock kernel after applying this patch.
This is missing the IO_SYNC stuff ... It
On Fri, 2016-06-03 at 11:32 +1000, Benjamin Herrenschmidt wrote:
> On Thu, 2016-06-02 at 17:22 +0800, Pan Xinhui wrote:
> >
> > Base code to enable qspinlock on powerpc. this patch add some
> > #ifdef
> > here and there. Although there is no paravirt related code, we
On Fri, 2016-06-03 at 12:10 +0800, xinhui wrote:
> On 2016-06-03 09:32, Benjamin Herrenschmidt wrote:
> > On Fri, 2016-06-03 at 11:32 +1000, Benjamin Herrenschmidt wrote:
> >> On Thu, 2016-06-02 at 17:22 +0800, Pan Xinhui wrote:
> >>>
> >>> Base code to e
On Mon, 2016-06-06 at 17:59 +0200, Peter Zijlstra wrote:
> On Fri, Jun 03, 2016 at 02:33:47PM +1000, Benjamin Herrenschmidt wrote:
> >
> > - For the above, can you show (or describe) where the qspinlock
> > improves things compared to our current locks.
> So cu
On Wed, 2018-05-23 at 21:50 +0300, Michael S. Tsirkin wrote:
> I re-read that discussion and I'm still unclear on the
> original question, since I got several apparently
> conflicting answers.
>
> I asked:
>
> Why isn't setting VIRTIO_F_IOMMU_PLATFORM on the
> hypervisor side
On Fri, 2018-06-15 at 02:16 -0700, Christoph Hellwig wrote:
> On Wed, Jun 13, 2018 at 11:11:01PM +1000, Benjamin Herrenschmidt wrote:
> > Actually ... the stuff in lib/dma-direct.c seems to be just it, no ?
> >
> > There's no cache flushing and there's no architecture hoo
On Wed, 2018-06-13 at 00:41 -0700, Christoph Hellwig wrote:
> On Mon, Jun 11, 2018 at 01:29:18PM +1000, Benjamin Herrenschmidt wrote:
> > At the risk of repeating myself, let's just do the first pass which is
> > to switch virtio over to always using the DMA API in the actual dat
On Wed, 2018-06-13 at 22:25 +1000, Benjamin Herrenschmidt wrote:
> On Wed, 2018-06-13 at 00:41 -0700, Christoph Hellwig wrote:
> > On Mon, Jun 11, 2018 at 01:29:18PM +1000, Benjamin Herrenschmidt wrote:
> > > At the risk of repeating myself, let's just do the first pass which
On Sun, 2018-06-10 at 19:39 -0700, Ram Pai wrote:
>
> However if the administrator
> ignores/forgets/deliberately-decides/is-constrained to NOT enable the
> flag, virtio will not be able to pass control to the DMA ops associated
> with the virtio devices. Which means, we have no opportunity to
On Mon, 2018-06-11 at 06:28 +0300, Michael S. Tsirkin wrote:
>
> > However if the administrator
> > ignores/forgets/deliberately-decides/is-constrained to NOT enable the
> > flag, virtio will not be able to pass control to the DMA ops associated
> > with the virtio devices. Which means, we have no
On Tue, 2018-05-29 at 07:03 -0700, Christoph Hellwig wrote:
> On Tue, May 29, 2018 at 09:56:24AM +1000, Benjamin Herrenschmidt wrote:
> > I don't think forcing the addition of an emulated iommu in the middle
> > just to work around the fact that virtio "cheats" and do
On Fri, 2018-05-25 at 20:45 +0300, Michael S. Tsirkin wrote:
> On Thu, May 24, 2018 at 08:27:04AM +1000, Benjamin Herrenschmidt wrote:
> > On Wed, 2018-05-23 at 21:50 +0300, Michael S. Tsirkin wrote:
> >
> > > I re-read that discussion and I'm still unclear on the
> >
On Tue, 2018-05-29 at 09:48 +1000, Benjamin Herrenschmidt wrote:
> > Well it's not supposed to be much slower for the static case.
> >
> > vhost has a cache so should be fine.
> >
> > A while ago Paolo implemented a translation cache which should be
> > perf
On Mon, 2018-06-04 at 18:57 +1000, David Gibson wrote:
>
> > - First qemu doesn't know that the guest will switch to "secure mode"
> > in advance. There is no difference between a normal and a secure
> > partition until the partition does the magic UV call to "enter secure
> > mode" and qemu
On Mon, 2018-06-04 at 05:55 -0700, Christoph Hellwig wrote:
> On Mon, Jun 04, 2018 at 03:43:09PM +0300, Michael S. Tsirkin wrote:
> > Another is that given the basic functionality is in there, optimizations
> > can possibly wait until per-device quirks in DMA API are supported.
>
> We have had
On Mon, 2018-06-04 at 19:21 +0300, Michael S. Tsirkin wrote:
>
> > > > - First qemu doesn't know that the guest will switch to "secure mode"
> > > > in advance. There is no difference between a normal and a secure
> > > > partition until the partition does the magic UV call to "enter secure
> > >
On Thu, 2018-08-02 at 18:41 +0300, Michael S. Tsirkin wrote:
>
> > I don't completely agree:
> >
> > 1 - VIRTIO_F_IOMMU_PLATFORM is a property of the "other side", ie qemu
> > for example. It indicates that the peer bypasses the normal platform
> > iommu. The platform code in the guest has no
On Wed, 2018-08-01 at 01:36 -0700, Christoph Hellwig wrote:
> We just need to figure out how to deal with devices that deviate
> from the default. One things is that VIRTIO_F_IOMMU_PLATFORM really
> should become VIRTIO_F_PLATFORM_DMA to cover the cases of non-iommu
> dma tweaks (offsets, cache
On Thu, 2018-08-02 at 20:19 +0300, Michael S. Tsirkin wrote:
>
> I see. So yes, given that device does not know or care, using
> virtio features is an awkward fit.
>
> So let's say as a quick fix for you maybe we could generalize the
> xen_domain hack, instead of just checking xen_domain check
On Thu, 2018-08-02 at 00:56 +0300, Michael S. Tsirkin wrote:
> > but it's not, VMs are
> > created in "legacy" mode all the time and we don't know at VM creation
> > time whether it will become a secure VM or not. The way our secure VMs
> > work is that they start as a normal VM, load a secure
On Thu, 2018-08-02 at 23:52 +0300, Michael S. Tsirkin wrote:
> > Yes, this is the purpose of Anshuman original patch (I haven't looked
> > at the details of the patch in a while but that's what I told him to
> > implement ;-) :
> >
> > - Make virtio always use DMA ops to simplify the code path
On Fri, 2018-08-03 at 00:05 -0700, Christoph Hellwig wrote:
> > 2- Make virtio use the DMA API with our custom platform-provided
> > swiotlb callbacks when needed, that is when not using IOMMU *and*
> > running on a secure VM in our case.
>
> And total NAK the custom platform-provided part of
On Tue, 2018-07-31 at 10:30 -0700, Christoph Hellwig wrote:
> > However the question people raise is that DMA API is already full of
> > arch-specific tricks the likes of which are outlined in your post linked
> > above. How is this one much worse?
>
> None of these warts is visible to the
On Fri, 2018-08-03 at 22:07 +0300, Michael S. Tsirkin wrote:
> On Fri, Aug 03, 2018 at 10:58:36AM -0500, Benjamin Herrenschmidt wrote:
> > On Fri, 2018-08-03 at 00:05 -0700, Christoph Hellwig wrote:
> > > > 2- Make virtio use the DMA API with our custom platform-pro
On Fri, 2018-08-03 at 22:07 +0300, Michael S. Tsirkin wrote:
> On Fri, Aug 03, 2018 at 10:58:36AM -0500, Benjamin Herrenschmidt wrote:
> > On Fri, 2018-08-03 at 00:05 -0700, Christoph Hellwig wrote:
> > > > 2- Make virtio use the DMA API with our custom platform-pro
On Fri, 2018-08-03 at 22:07 +0300, Michael S. Tsirkin wrote:
> On Fri, Aug 03, 2018 at 10:58:36AM -0500, Benjamin Herrenschmidt wrote:
> > On Fri, 2018-08-03 at 00:05 -0700, Christoph Hellwig wrote:
> > > > 2- Make virtio use the DMA API with our custom platform-pro
On Sun, 2018-08-05 at 03:09 +0300, Michael S. Tsirkin wrote:
> It seems that the fact that within guest it's implemented using a bounce
> buffer and that it's easiest to do by switching virtio to use the DMA API
> isn't something virtio spec concerns itself with.
Right, this is my reasoning as