On Mon, 2019-07-15 at 19:03 -0300, Thiago Jung Bauermann wrote:
> > > Indeed. The idea is that QEMU can offer the flag, old guests can
> > > reject
> > > it (or even new guests can reject it, if they decide not to
> > > convert into
> > > secure VMs) and the feature negotiation will succeed with
On Thu, 2018-08-09 at 08:13 +1000, Benjamin Herrenschmidt wrote:
> > For completeness, virtio could also have its own bounce buffer
> > outside of DMA API one. I don't see lots of benefits to this
> > though.
>
> Not fan of that either...
To elaborate a bit ...
For our
On Wed, 2018-08-08 at 23:31 +0300, Michael S. Tsirkin wrote:
> On Wed, Aug 08, 2018 at 11:18:13PM +1000, Benjamin Herrenschmidt wrote:
> > Sure, but all of this is just the configuration of the iommu. But I
> > think we agree here, and your point remains valid, indeed my pr
On Wed, 2018-08-08 at 05:30 -0700, Christoph Hellwig wrote:
> On Wed, Aug 08, 2018 at 08:07:49PM +1000, Benjamin Herrenschmidt wrote:
> > Qemu virtio bypasses that iommu when the VIRTIO_F_IOMMU_PLATFORM flag
> > is not set (default) but there's nothing in the device-tree to tel
On Tue, 2018-08-07 at 23:31 -0700, Christoph Hellwig wrote:
>
> You don't need to set them the time you go secure. You just need to
> set the flag from the beginning on any VM you might want to go secure.
> Or for simplicity just any VM - if the DT/ACPI tables exposed by
> qemu are good enough
On Tue, 2018-08-07 at 06:55 -0700, Christoph Hellwig wrote:
> On Tue, Aug 07, 2018 at 04:42:44PM +1000, Benjamin Herrenschmidt wrote:
> > Note that I can make it so that the same DMA ops (basically standard
> > swiotlb ops without arch hacks) work for both "direct virtio"
On Mon, 2018-08-06 at 23:27 -0700, Christoph Hellwig wrote:
> On Tue, Aug 07, 2018 at 08:13:56AM +1000, Benjamin Herrenschmidt wrote:
> > It would be indeed ideal if all we had to do was setup some kind of
> > bus_dma_mask on all PCI devices and have virtio automagically insert
On Mon, 2018-08-06 at 23:21 -0700, Christoph Hellwig wrote:
> On Tue, Aug 07, 2018 at 05:52:12AM +1000, Benjamin Herrenschmidt wrote:
> > > It is your job to write a coherent interface specification that does
> > > not depend on the used components. The hypervisor might
On Tue, 2018-08-07 at 02:45 +0300, Michael S. Tsirkin wrote:
> > OK well, assuming Christoph can solve the direct case in a way that
> > also work for the virtio !iommu case, we still want some bit of logic
> > somewhere that will "switch" to swiotlb based ops if the DMA mask is
> > limited.
> >
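The "switch to swiotlb based ops if the DMA mask is limited" logic described above can be sketched roughly as follows; the names here are illustrative, not the kernel's actual API:

```c
#include <assert.h>
#include <stdint.h>

enum dma_ops_kind { DMA_OPS_DIRECT, DMA_OPS_SWIOTLB };

/* Pick bounce-buffered (swiotlb-style) ops when the device's DMA mask
 * cannot reach the highest guest physical address; otherwise stay on the
 * direct 1:1 path. Purely a sketch of the decision, not kernel code. */
static inline enum dma_ops_kind pick_dma_ops(uint64_t dma_mask,
                                             uint64_t max_guest_phys)
{
    return (dma_mask < max_guest_phys) ? DMA_OPS_SWIOTLB : DMA_OPS_DIRECT;
}
```

The point of centralizing this is that virtio would not need its own arch hacks: the same generic check covers both the limited-mask and the full-mask case.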
On Mon, 2018-08-06 at 23:35 +0300, Michael S. Tsirkin wrote:
> On Tue, Aug 07, 2018 at 05:56:59AM +1000, Benjamin Herrenschmidt wrote:
> > On Mon, 2018-08-06 at 16:46 +0300, Michael S. Tsirkin wrote:
> > >
> > > > Right, we'll need some quirk to disable balloons
On Tue, 2018-08-07 at 08:13 +1000, Benjamin Herrenschmidt wrote:
>
> OK well, assuming Christoph can solve the direct case in a way that
> also work for the virtio !iommu case, we still want some bit of logic
> somewhere that will "switch" to swiotlb based ops if the
On Tue, 2018-08-07 at 00:46 +0300, Michael S. Tsirkin wrote:
> On Tue, Aug 07, 2018 at 07:26:35AM +1000, Benjamin Herrenschmidt wrote:
> > On Mon, 2018-08-06 at 23:35 +0300, Michael S. Tsirkin wrote:
> > > > As I said replying to Christoph, we are "leaking" into
On Mon, 2018-08-06 at 23:35 +0300, Michael S. Tsirkin wrote:
> > As I said replying to Christoph, we are "leaking" into the interface
> > something here that is really what's the VM is doing to itself, which
> > is to stash its memory away in an inaccessible place.
> >
> > Cheers,
> > Ben.
>
> I
On Mon, 2018-08-06 at 16:46 +0300, Michael S. Tsirkin wrote:
>
> > Right, we'll need some quirk to disable balloons in the guest I
> > suppose.
> >
> > Passing something from libvirt is cumbersome because the end user may
> > not even need to know about secure VMs. There are use cases where the
On Mon, 2018-08-06 at 02:42 -0700, Christoph Hellwig wrote:
> On Mon, Aug 06, 2018 at 07:16:47AM +1000, Benjamin Herrenschmidt wrote:
> > Who would set this bit ? qemu ? Under what circumstances ?
>
> I don't really care who sets what. The implementation might not even
On Mon, 2018-08-06 at 07:16 +1000, Benjamin Herrenschmidt wrote:
> I'm trying to understand because the limitation is not a device side
> limitation, it's not a qemu limitation, it's actually more of a VM
> limitation. It has most of its memory pages made inaccessible for
> secu
On Sun, 2018-08-05 at 00:29 -0700, Christoph Hellwig wrote:
> On Sun, Aug 05, 2018 at 11:10:15AM +1000, Benjamin Herrenschmidt wrote:
> > - One you have rejected, which is to have a way for "no-iommu" virtio
> > (which still doesn't use an iommu on the q
On Sun, 2018-08-05 at 03:22 +0300, Michael S. Tsirkin wrote:
> I see the allure of this, but I think down the road you will
> discover passing a flag in libvirt XML saying
> "please use a secure mode" or whatever is a good idea.
>
> Even thought it is probably not required to address this
>
On Sat, 2018-08-04 at 01:21 -0700, Christoph Hellwig wrote:
> No matter if you like it or not (I don't!) virtio is defined to bypass
> dma translations, it is very clearly stated in the spec. It has some
> ill-defined bits to bypass it, so if you want the dma mapping API
> to be used you'll have
On Sun, 2018-08-05 at 03:09 +0300, Michael S. Tsirkin wrote:
> It seems that the fact that within guest it's implemented using a bounce
> buffer and that it's easiest to do by switching virtio to use the DMA API
> isn't something virtio spec concerns itself with.
Right, this is my reasoning as
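The bounce-buffer mechanism being discussed (the guest copies data through a hypervisor-accessible region before the device touches it) amounts to something like this sketch; the fixed-size pool and the function names are assumptions for illustration only:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* A toy bounce pool standing in for swiotlb: data destined for the device
 * is first copied into memory the "other side" is allowed to read. */
static unsigned char bounce_pool[4096];

/* Copy 'len' bytes of private guest data into the shared pool and return
 * the offset the device would be handed instead of the original address. */
static size_t bounce_map(const void *src, size_t len)
{
    assert(len <= sizeof(bounce_pool));
    memcpy(bounce_pool, src, len);
    return 0; /* offset into the shared pool */
}
```

This is exactly why routing virtio through the DMA API is attractive: swiotlb already implements this copy-in/copy-out discipline, so the guest-internal bouncing never has to appear in the virtio spec.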
On Fri, 2018-08-03 at 22:08 +0300, Michael S. Tsirkin wrote:
> > > > Please go through these patches and review whether this approach broadly
> > > > makes sense. I will appreciate suggestions, inputs, comments regarding
> > > > the patches or the approach in general. Thank you.
> > >
> > > Jason
On Fri, 2018-08-03 at 22:07 +0300, Michael S. Tsirkin wrote:
> On Fri, Aug 03, 2018 at 10:58:36AM -0500, Benjamin Herrenschmidt wrote:
> > On Fri, 2018-08-03 at 00:05 -0700, Christoph Hellwig wrote:
> > > > 2- Make virtio use the DMA API with our custom platform-pro
On Fri, 2018-08-03 at 09:02 -0700, Christoph Hellwig wrote:
> On Fri, Aug 03, 2018 at 10:58:36AM -0500, Benjamin Herrenschmidt wrote:
> > On Fri, 2018-08-03 at 00:05 -0700, Christoph Hellwig wrote:
> > > > 2- Make virtio use the DMA API with our custom platform-provided
>
On Fri, 2018-08-03 at 00:05 -0700, Christoph Hellwig wrote:
> > 2- Make virtio use the DMA API with our custom platform-provided
> > swiotlb callbacks when needed, that is when not using IOMMU *and*
> > running on a secure VM in our case.
>
> And total NAK the customer platform-provided part of
On Thu, 2018-08-02 at 23:52 +0300, Michael S. Tsirkin wrote:
> > Yes, this is the purpose of Anshuman original patch (I haven't looked
> > at the details of the patch in a while but that's what I told him to
> > implement ;-) :
> >
> > - Make virtio always use DMA ops to simplify the code path
On Thu, 2018-08-02 at 20:19 +0300, Michael S. Tsirkin wrote:
>
> I see. So yes, given that device does not know or care, using
> virtio features is an awkward fit.
>
> So let's say as a quick fix for you maybe we could generalize the
> xen_domain hack, instead of just checking xen_domain check
On Thu, 2018-08-02 at 18:41 +0300, Michael S. Tsirkin wrote:
>
> > I don't completely agree:
> >
> > 1 - VIRTIO_F_IOMMU_PLATFORM is a property of the "other side", ie qemu
> > for example. It indicates that the peer bypasses the normal platform
> > iommu. The platform code in the guest has no
On Thu, 2018-08-02 at 00:56 +0300, Michael S. Tsirkin wrote:
> > but it's not, VMs are
> > created in "legacy" mode all the times and we don't know at VM creation
> > time whether it will become a secure VM or not. The way our secure VMs
> > work is that they start as a normal VM, load a secure
On Wed, 2018-08-01 at 01:36 -0700, Christoph Hellwig wrote:
> We just need to figure out how to deal with devices that deviate
> from the default. One things is that VIRTIO_F_IOMMU_PLATFORM really
> should become VIRTIO_F_PLATFORM_DMA to cover the cases of non-iommu
> dma tweaks (offsets, cache
On Tue, 2018-07-31 at 10:30 -0700, Christoph Hellwig wrote:
> > However the question people raise is that DMA API is already full of
> > arch-specific tricks the likes of which are outlined in your post linked
> > above. How is this one much worse?
>
> None of these warts is visible to the
On Fri, 2018-06-15 at 02:16 -0700, Christoph Hellwig wrote:
> On Wed, Jun 13, 2018 at 11:11:01PM +1000, Benjamin Herrenschmidt wrote:
> > Actually ... the stuff in lib/dma-direct.c seems to be just it, no ?
> >
> > There's no cache flushing and there's no architecture hoo
On Wed, 2018-06-13 at 22:25 +1000, Benjamin Herrenschmidt wrote:
> On Wed, 2018-06-13 at 00:41 -0700, Christoph Hellwig wrote:
> > On Mon, Jun 11, 2018 at 01:29:18PM +1000, Benjamin Herrenschmidt wrote:
> > > At the risk of repeating myself, let's just do the first pass which
On Wed, 2018-06-13 at 00:41 -0700, Christoph Hellwig wrote:
> On Mon, Jun 11, 2018 at 01:29:18PM +1000, Benjamin Herrenschmidt wrote:
> > At the risk of repeating myself, let's just do the first pass which is
> > to switch virtio over to always using the DMA API in the actual dat
On Mon, 2018-06-11 at 06:28 +0300, Michael S. Tsirkin wrote:
>
> > However if the administrator
> > ignores/forgets/deliberatey-decides/is-constrained to NOT enable the
> > flag, virtio will not be able to pass control to the DMA ops associated
> > with the virtio devices. Which means, we have no
On Sun, 2018-06-10 at 19:39 -0700, Ram Pai wrote:
>
> However if the administrator
> ignores/forgets/deliberatey-decides/is-constrained to NOT enable the
> flag, virtio will not be able to pass control to the DMA ops associated
> with the virtio devices. Which means, we have no opportunity to
On Mon, 2018-06-04 at 19:21 +0300, Michael S. Tsirkin wrote:
>
> > > > - First qemu doesn't know that the guest will switch to "secure mode"
> > > > in advance. There is no difference between a normal and a secure
> > > > partition until the partition does the magic UV call to "enter secure
> > >
On Mon, 2018-06-04 at 05:55 -0700, Christoph Hellwig wrote:
> On Mon, Jun 04, 2018 at 03:43:09PM +0300, Michael S. Tsirkin wrote:
> > Another is that given the basic functionality is in there, optimizations
> > can possibly wait until per-device quirks in DMA API are supported.
>
> We have had
On Mon, 2018-06-04 at 18:57 +1000, David Gibson wrote:
>
> > - First qemu doesn't know that the guest will switch to "secure mode"
> > in advance. There is no difference between a normal and a secure
> > partition until the partition does the magic UV call to "enter secure
> > mode" and qemu
On Tue, 2018-05-29 at 07:03 -0700, Christoph Hellwig wrote:
> On Tue, May 29, 2018 at 09:56:24AM +1000, Benjamin Herrenschmidt wrote:
> > I don't think forcing the addition of an emulated iommu in the middle
> > just to work around the fact that virtio "cheats" and do
On Tue, 2018-05-29 at 09:48 +1000, Benjamin Herrenschmidt wrote:
> > Well it's not supposed to be much slower for the static case.
> >
> > vhost has a cache so should be fine.
> >
> > A while ago Paolo implemented a translation cache which should be
> > perf
On Fri, 2018-05-25 at 20:45 +0300, Michael S. Tsirkin wrote:
> On Thu, May 24, 2018 at 08:27:04AM +1000, Benjamin Herrenschmidt wrote:
> > On Wed, 2018-05-23 at 21:50 +0300, Michael S. Tsirkin wrote:
> >
> > > I re-read that discussion and I'm still unclear on the
> >
On Wed, 2018-05-23 at 21:50 +0300, Michael S. Tsirkin wrote:
> I re-read that discussion and I'm still unclear on the
> original question, since I got several apparently
> conflicting answers.
>
> I asked:
>
> Why isn't setting VIRTIO_F_IOMMU_PLATFORM on the
> hypervisor side
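For reference, the flag in question is an ordinary virtio feature bit negotiated between device and driver; a minimal sketch of the guest-side check, using the bit number from the virtio 1.0 spec, might look like:

```c
#include <assert.h>
#include <stdint.h>

/* VIRTIO_F_IOMMU_PLATFORM (renamed VIRTIO_F_ACCESS_PLATFORM in later spec
 * revisions) is feature bit 33. When negotiated, the driver must go through
 * the platform's DMA path instead of handing the device raw guest physical
 * addresses. */
#define VIRTIO_F_IOMMU_PLATFORM 33

static inline int virtio_uses_platform_dma(uint64_t negotiated_features)
{
    return (uint64_t)1 & (negotiated_features >> VIRTIO_F_IOMMU_PLATFORM);
}
```

The friction in the thread is precisely that this is negotiated per device by the hypervisor, while the secure-VM constraint is a guest-side property the hypervisor may not know about at VM creation time.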
On Fri, 2018-04-06 at 00:16 -0700, Christoph Hellwig wrote:
> On Fri, Apr 06, 2018 at 08:23:10AM +0530, Anshuman Khandual wrote:
> > On 04/06/2018 02:48 AM, Benjamin Herrenschmidt wrote:
> > > On Thu, 2018-04-05 at 21:34 +0300, Michael S. Tsirkin wrote:
On Thu, 2018-04-05 at 21:34 +0300, Michael S. Tsirkin wrote:
> > In this specific case, because that would make qemu expect an iommu,
> > and there isn't one.
>
>
> I think that you can set iommu_platform in qemu without an iommu.
No I mean the platform has one but it's not desirable for it to
On Thu, 2018-04-05 at 17:54 +0300, Michael S. Tsirkin wrote:
> On Thu, Apr 05, 2018 at 08:09:30PM +0530, Anshuman Khandual wrote:
> > On 04/05/2018 04:26 PM, Anshuman Khandual wrote:
> > > There are certian platforms which would like to use SWIOTLB based DMA API
> > > for bouncing purpose without
On Mon, 2016-06-06 at 17:59 +0200, Peter Zijlstra wrote:
> On Fri, Jun 03, 2016 at 02:33:47PM +1000, Benjamin Herrenschmidt wrote:
> >
> > - For the above, can you show (or describe) where the qspinlock
> > improves things compared to our current locks.
> So cu
On Fri, 2016-06-03 at 12:10 +0800, xinhui wrote:
> On 2016-06-03 09:32, Benjamin Herrenschmidt wrote:
> > On Fri, 2016-06-03 at 11:32 +1000, Benjamin Herrenschmidt wrote:
> >> On Thu, 2016-06-02 at 17:22 +0800, Pan Xinhui wrote:
> >>>
> >>> Base code to e
On Fri, 2016-06-03 at 11:32 +1000, Benjamin Herrenschmidt wrote:
> On Thu, 2016-06-02 at 17:22 +0800, Pan Xinhui wrote:
> >
> > Base code to enable qspinlock on powerpc. this patch add some
> > #ifdef
> > here and there. Although there is no paravirt related code, we
On Thu, 2016-06-02 at 17:22 +0800, Pan Xinhui wrote:
> Base code to enable qspinlock on powerpc. this patch add some #ifdef
> here and there. Although there is no paravirt related code, we can
> successfully build a qspinlock kernel after apply this patch.
This is missing the IO_SYNC stuff ... It
On Thu, 2015-11-19 at 23:38 +, David Woodhouse wrote:
>
> I understand that POWER and other platforms don't currently have a
> clean way to indicate that certain device don't have translation. And I
> understand that we may end up with a *quirk* which ensures that the DMA
> API does the right
On Tue, 2015-11-10 at 11:27 +0100, Joerg Roedel wrote:
>
> You have the same problem when real PCIe devices appear that speak
> virtio. I think the only real (still not very nice) solution is to add a
> quirk to powerpc platform code that sets noop dma-ops for the existing
> virtio
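A quirk along the lines Joerg suggests would key off virtio's vendor ID to select no-op (direct, 1:1) DMA ops for existing devices; a rough sketch, with hypothetical identifiers apart from the real vendor ID:

```c
#include <assert.h>
#include <stdint.h>

#define PCI_VENDOR_ID_REDHAT_QUMRANET 0x1af4 /* virtio's donated vendor ID */

enum ppc_dma_path { DMA_PATH_IOMMU, DMA_PATH_NOOP };

/* Sketch of the proposed platform quirk: legacy virtio devices bypass the
 * iommu and get no-op DMA ops; everything else keeps the iommu-backed ones. */
static inline enum ppc_dma_path ppc_dma_path_for(uint16_t vendor)
{
    return vendor == PCI_VENDOR_ID_REDHAT_QUMRANET ? DMA_PATH_NOOP
                                                   : DMA_PATH_IOMMU;
}
```

As the thread notes, the weakness of any such quirk is real PCIe hardware that speaks virtio: it carries the same vendor ID but genuinely sits behind the iommu.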
On Tue, 2015-11-10 at 14:43 +0200, Michael S. Tsirkin wrote:
> But not virtio-pci I think - that's broken for that usecase since we use
> weaker barriers than required for real IO, as these have measureable
> overhead. We could have a feature "is a real PCI device",
> that's completely
On Tue, 2015-11-10 at 10:54 -0800, Andy Lutomirski wrote:
>
> Does that work on powerpc on existing kernels?
>
> Anyway, here's another crazy idea: make the quirk assume that the
> IOMMU is bypasses if and only if the weak barriers bit is set on
> systems that are missing the new DT binding.
On Mon, 2015-11-09 at 21:35 -0800, Andy Lutomirski wrote:
>
> We could do it the other way around: on powerpc, if a PCI device is in
> that range and doesn't have the "bypass" property at all, then it's
> assumed to bypass the IOMMU. This means that everything that
> currently works continues
On Tue, 2015-11-10 at 15:44 -0800, Andy Lutomirski wrote:
>
> > What about partition <-> partition virtio such as what we could do on
> > PAPR systems. That would have the weak barrier bit.
> >
>
> Is it partition <-> partition, bypassing IOMMU?
No.
> I think I'd settle for just something that
On Tue, 2015-11-10 at 10:45 +0100, Knut Omang wrote:
> Can something be done by means of PCIe capabilities?
> ATS (Address Translation Support) seems like a natural choice?
Euh no... ATS is something else completely
Cheers,
Ben.
On Tue, 2015-11-10 at 20:46 -0800, Andy Lutomirski wrote:
> Me neither. At least it wouldn't be a regression, but it's still
> crappy.
>
> I think that arm is fine, at least. I was unable to find an arm QEMU
> config that has any problems with my patches.
Ok, give me a few days for my headache
On Mon, 2015-11-09 at 18:18 -0800, Andy Lutomirski wrote:
>
> /* Qumranet donated their vendor ID for devices 0x1000 thru 0x10FF. */
> static const struct pci_device_id virtio_pci_id_table[] = {
> { PCI_DEVICE(0x1af4, PCI_ANY_ID) },
> { 0 }
> };
>
> Can we match on that range?
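Matching on that donated range rather than PCI_ANY_ID could look like the following sketch; the 0x1000-0x10ff bounds come from the comment in the quoted table, and the helper name is hypothetical:

```c
#include <assert.h>
#include <stdint.h>

/* True for devices in the 0x1000-0x10ff window Qumranet donated for virtio
 * (transitional devices sit at 0x1000-0x103f, virtio 1.0 devices at 0x1040
 * and up). */
static inline int is_virtio_pci_id(uint16_t vendor, uint16_t device)
{
    return vendor == 0x1af4 && device >= 0x1000 && device <= 0x10ff;
}
```

A range match like this would let platform code recognize virtio devices even when no virtio driver is bound, which is the vfio-assignment case Andy raises above.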
On Mon, 2015-11-09 at 16:46 -0800, Andy Lutomirski wrote:
> The problem here is that in some of the problematic cases the virtio
> driver may not even be loaded. If someone runs an L1 guest with an
> IOMMU-bypassing virtio device and assigns it to L2 using vfio, then
> *boom* L1 crashes. (Same
On Mon, 2015-11-09 at 18:18 -0800, Andy Lutomirski wrote:
>
> Which leaves the special case of Xen, where even preexisting devices
> don't bypass the IOMMU. Can we keep this specific to powerpc and
> sparc? On x86, this problem is basically nonexistent, since the IOMMU
> is properly
So ...
I've finally tried to sort that out for powerpc and I can't find a way
to make that work that isn't a complete pile of stinking shit.
I'm very tempted to go back to my original idea: virtio itself should
indicate it's "bypassing ability" via the virtio config space or some
other bit (like
On Wed, 2015-07-29 at 10:17 +0200, Paolo Bonzini wrote:
On 29/07/2015 02:47, Andy Lutomirski wrote:
If new kernels ignore the IOMMU for devices that don't set the flag
and there are physical devices that already exist and don't set the
flag, then those devices won't work reliably on
On Tue, 2015-07-28 at 17:47 -0700, Andy Lutomirski wrote:
Yes, virtio flag. I dislike having a virtio flag at all, but so far
no one has come up with any better ideas. If there was a reliable,
cross-platform mechanism for per-device PCI bus properties, I'd be all
for using that instead.
On Tue, 2015-07-28 at 15:43 -0700, Andy Lutomirski wrote:
Let me try to summarize a proposal:
Add a feature flag that indicates IOMMU support.
New kernels acknowledge that flag on any device that advertises it.
New kernels always respect the IOMMU (except on PowerPC).
Why ? I disagree,
On Tue, 2015-07-28 at 16:33 -0700, Andy Lutomirski wrote:
On Tue, Jul 28, 2015 at 4:21 PM, Benjamin Herrenschmidt
b...@kernel.crashing.org wrote:
On Tue, 2015-07-28 at 15:43 -0700, Andy Lutomirski wrote:
Let me try to summarize a proposal:
Add a feature flag that indicates IOMMU support
On Tue, 2015-07-28 at 10:16 +0200, Paolo Bonzini wrote:
On 28/07/2015 03:08, Andy Lutomirski wrote:
On Mon, Sep 1, 2014 at 10:39 AM, Andy Lutomirski l...@amacapital.net
wrote:
This fixes virtio on Xen guests as well as on any other platform
that uses virtio_pci on which physical
On Wed, 2015-03-11 at 23:03 +0100, Greg Kurz wrote:
/* The host notifier will be swapped in adjust_endianness() according to the
* target default endianness. We need to negate this swap if the device uses
* an endianness that is not the default (ppc64le for example).
*/
+static
On Wed, 2014-10-22 at 16:17 +0200, Jan Kiszka wrote:
I thought about this again, and I'm not sure anymore if we can use
ACPI
to black-list the incompatible virtio devices. Reason: hotplug. To
my
understanding, the ACPI DRHD tables won't change during runtime when a
device shows up or
On Mon, 2014-10-06 at 11:59 +0200, Christian Borntraeger wrote:
Just as a comment: On s390 we always considered the memory access as
access to real memory (not device memory) for virtio accesses. I
prefer to not touch the DMA API on s390 as it is quite s390-PCI
specific but it is somewhat
On Mon, 2014-09-29 at 11:55 -0700, Andy Lutomirski wrote:
Rusty and Michael, what's the status of this?
The status is that I still think we need *a* way to actually inform the
guest whether the virtio implementation will or will not bypass the
IOMMU. I don't know Xen enough to figure out how to
On Mon, 2014-09-29 at 13:55 -0700, Andy Lutomirski wrote:
If the eventual solution is to say that virtio 1.0 PCI devices always
respect an IOMMU unless they set a magic flag saying I'm not real
hardware and I bypass the IOMMU, then I don't really object to that,
except that it'll be a mess if
On Wed, 2014-09-24 at 14:41 -0700, Andy Lutomirski wrote:
On Sat, Sep 20, 2014 at 10:05 PM, Benjamin Herrenschmidt
b...@kernel.crashing.org wrote:
On Sun, 2014-09-21 at 15:03 +1000, Benjamin Herrenschmidt wrote:
The exception I mentioned is that I would really like the virtio device
On Wed, 2014-09-24 at 14:59 -0700, Andy Lutomirski wrote:
Scratch that idea, then.
The best that I can currently come up with is to say that pre-1.0
devices on PPC bypass the IOMMU and that 1.0 devices on PPC and all
devices on all other architectures do not bypass the IOMMU.
Well, the
On Wed, 2014-09-24 at 15:15 -0700, Andy Lutomirski wrote:
On Wed, Sep 24, 2014 at 3:04 PM, Benjamin Herrenschmidt
b...@kernel.crashing.org wrote:
On Wed, 2014-09-24 at 14:59 -0700, Andy Lutomirski wrote:
Scratch that idea, then.
The best that I can currently come up with is to say
On Fri, 2014-09-19 at 22:59 -0700, Andy Lutomirski wrote:
Sure.
The question is: should the patches go in to 3.18 as-is, or should
they wait? It would be straightforward to remove the use_dma_api
switch once PPC, s390, and virtio_mmio are ready.
I don't mind the patches going in now in their
On Sun, 2014-09-21 at 15:03 +1000, Benjamin Herrenschmidt wrote:
The exception I mentioned is that I would really like the virtio device
to expose via whatever transport we chose to use (though capability
exchange sounds like a reasonable one) whether the server
implementation is bypassing
On Wed, 2014-09-17 at 09:49 -0700, David Woodhouse wrote:
On Wed, 2014-09-17 at 09:07 -0700, Andy Lutomirski wrote:
I still think that this is a property of the bus, not the device. x86
has such a mechanism, and this patch uses it transparently.
Right. A device driver should use the
On Wed, 2014-09-17 at 17:16 +0300, Michael S. Tsirkin wrote:
On Wed, Sep 17, 2014 at 08:02:31AM -0400, Benjamin Herrenschmidt wrote:
On Tue, 2014-09-16 at 22:22 -0700, Andy Lutomirski wrote:
On non-PPC systems, virtio_pci should use the DMA API. This fixes
virtio_pci on Xen. On PPC
On Wed, 2014-09-17 at 09:07 -0700, Andy Lutomirski wrote:
It shouldn't. That being said, at some point this problem will need
solving on PPC, and this patch doesn't help much, other than adding
the virtio_ring piece.
I'd really like to see the generic or arch IOMMU code handle this so
that
On Tue, 2014-09-16 at 22:22 -0700, Andy Lutomirski wrote:
On non-PPC systems, virtio_pci should use the DMA API. This fixes
virtio_pci on Xen. On PPC, using the DMA API would break things, so
we need to preserve the old behavior.
The big comment in this patch explains the considerations in
On Tue, 2014-09-02 at 16:11 -0700, Andy Lutomirski wrote:
I don't think so. I would argue that it's a straight-up bug for QEMU
to expose a physically-addressed virtio-pci device to the guest behind
an emulated IOMMU. QEMU may already be doing that on ppc64, but it
isn't on x86_64 or arm
On Wed, 2014-09-03 at 09:47 +0200, Paolo Bonzini wrote:
IOMMU support for x86 is going to go in this week.
But won't that break virtio on x86 ? Or will virtio continue bypassing
it ? IE, the guest side virtio doesn't expect an IOMMU and doesn't call
the dma mappings ops.
However, it is and
On Tue, 2014-09-02 at 16:42 -0700, Andy Lutomirski wrote:
But there aren't any ACPI systems with both virtio-pci and IOMMUs,
right? So we could say that, henceforth, ACPI systems must declare
whether virtio-pci devices live behind IOMMUs without breaking
backward compatibility.
I don't know
On Tue, 2014-09-02 at 17:32 -0700, Andy Lutomirski wrote:
I agree *except* that implementing it will be a real PITA and (I
think) can't be done without changing code in arch/. My patches plus
an ifdef powerpc will be functionally equivalent, just uglier.
So for powerpc, it's a 2 liner
On Fri, 2014-09-05 at 12:01 +0930, Rusty Russell wrote:
If the answers are both yes, then x86 is going to be able to use
virtio+IOMMU, so PPC looks like the odd one out.
Well, yes and no ... ppc will be able to do that too, it's just
pointless and will suck performances.
Additionally, it will
On Thu, 2014-09-04 at 19:57 -0700, Andy Lutomirski wrote:
There's a third option: try to make virtio-mmio work everywhere
(except s390), at least in the long run. This other benefits: it
makes minimal hypervisors simpler, I think it'll get rid of the limits
on the number of virtio devices in
On Mon, 2014-09-01 at 22:55 -0700, Andy Lutomirski wrote:
On x86, at least, I doubt that we'll ever see a physically addressed
PCI virtio device for which ACPI advertises an IOMMU, since any sane
hypervisor will just not advertise an IOMMU for the virtio device.
But are there arm64 or PPC
On Tue, 2014-09-02 at 16:56 -0400, Konrad Rzeszutek Wilk wrote:
On Wed, Sep 03, 2014 at 06:53:33AM +1000, Benjamin Herrenschmidt wrote:
On Mon, 2014-09-01 at 22:55 -0700, Andy Lutomirski wrote:
On x86, at least, I doubt that we'll ever see a physically addressed
PCI virtio device
On Tue, 2014-09-02 at 14:37 -0700, Andy Lutomirski wrote:
Let's take a step back from from the implementation. What is a driver
for a virtio PCI device (i.e. a PCI device with vendor 0x1af4)
supposed to do on ppc64?
Today, it's supposed to send guest physical addresses. We can make that
On Mon, 2014-09-01 at 10:39 -0700, Andy Lutomirski wrote:
Changes from v1:
- Using the DMA API is optional now. It would be nice to improve the
DMA API to the point that it could be used unconditionally, but s390
proves that we're not there yet.
- Includes patch 4, which fixes DMA
On Wed, 2014-08-27 at 20:40 +0930, Rusty Russell wrote:
Hi Andy,
This has long been a source of contention. virtio assumes that
the hypervisor can decode guest-physical addresses.
PowerPC, in particular, doesn't want to pay the cost of IOMMU
manipulations, and all
On Wed, 2014-08-27 at 13:52 +0200, Michael S. Tsirkin wrote:
For x86 as of QEMU 2.0 there's no iommu.
So a reasonable thing to do for that platform
might be to always use iommu *if it's there*.
My understanding is this isn't the case for powerpc?
All 64-bit powerpc have an iommu but not all
On Wed, 2013-02-13 at 15:28 +, Marc Zyngier wrote:
Well, the spec clearly says that the registers reflect the endianess of
the guest, and it makes sense: when performing the MMIO access, KVM
needs to convert between host and guest endianess.
It's actually a horrible idea :-)
What does
On Mon, 2012-06-04 at 14:15 +0930, Rusty Russell wrote:
Something along those lines is also needed for remote processors
which
access memory via an IOMMU (e.g. OMAP4's M3 and DSP).
Allocating the memory via the DMA API will seamlessly configure the
relevant IOMMU as needed, and will
On Wed, 2012-01-11 at 08:47 +, Stefan Hajnoczi wrote:
This is also an opportunity to stop using CPU physical addresses in
the ring and instead perform DMA like a normal PCI device (use bus
addresses).
Euh why ?
That would mean in many cases adding a layer of iommu, which will slow
On Wed, 2012-01-11 at 14:28 +, Stefan Hajnoczi wrote:
On Wed, Jan 11, 2012 at 9:10 AM, Benjamin Herrenschmidt
b...@kernel.crashing.org wrote:
On Wed, 2012-01-11 at 08:47 +, Stefan Hajnoczi wrote:
This is also an opportunity to stop using CPU physical addresses in
the ring
On Wed, 2012-01-11 at 17:12 +0200, Michael S. Tsirkin wrote:
This is similar to what we have now. But it's still buggy: e.g. if guest
updates MAC byte by byte, we have no way to know when it's done doing
so.
Do like real HW, there's plenty of options:
- (better) Have a command update MAC
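Having a single command update the MAC atomically is what virtio-net's control virtqueue later standardized; a sketch of building such a command, with the class/command constants matching the Linux virtio_net header but the struct layout simplified for illustration:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define VIRTIO_NET_CTRL_MAC          1
#define VIRTIO_NET_CTRL_MAC_ADDR_SET 1

/* Simplified control-command layout: class, command, then the payload.
 * The actual ring transfer is omitted; the point is that the whole MAC
 * travels as one message instead of byte-by-byte config-space writes. */
struct ctrl_mac_cmd {
    uint8_t class_;
    uint8_t cmd;
    uint8_t mac[6];
};

static void build_mac_set(struct ctrl_mac_cmd *c, const uint8_t mac[6])
{
    c->class_ = VIRTIO_NET_CTRL_MAC;
    c->cmd = VIRTIO_NET_CTRL_MAC_ADDR_SET;
    memcpy(c->mac, mac, 6);
}
```

Because the device sees the full six bytes in one request, there is no window where a half-updated address is visible, which is exactly the ambiguity the quoted message complains about.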