On Tue, 2020-10-06 at 23:26 +0200, Thomas Gleixner wrote:
> On Mon, Oct 05 2020 at 16:28, David Woodhouse wrote:
> > From: David Woodhouse
> >
> > This is the maximum possible set of CPUs which can be used. Use it
> > to calculate the default affinity requested from __irq_alloc_descs()
> > by
On Tue, Oct 06, 2020 at 01:46:12PM -0700, Stefano Stabellini wrote:
> OK, this makes a lot of sense, and I like the patch because it makes the
> swiotlb interface clearer.
>
> Just one comment below.
>
> > +phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
> >
On Tue, 2020-10-06 at 23:42 +0200, Thomas Gleixner wrote:
> On Mon, Oct 05 2020 at 16:28, David Woodhouse wrote:
> > From: David Woodhouse
> >
> > This is the mask of CPUs to which IRQs can be delivered without
> > interrupt
> > remapping.
> >
> > +/* Mask of CPUs which can be targeted by
On Tue, Oct 06, 2020 at 09:19:32AM -0400, Jonathan Marek wrote:
> One example why drm/msm can't use DMA API is multiple page table support
> (that is landing in 5.10), which is something that definitely couldn't work
> with DMA API.
>
> Another one is being able to choose the address for
On Tue, 2020-10-06 at 23:54 +0200, Thomas Gleixner wrote:
> On Mon, Oct 05 2020 at 16:28, David Woodhouse wrote:
>
> > From: David Woodhouse
> >
> > When interrupt remapping isn't enabled, only the first 255 CPUs can
>
> No, only CPUs with an APIC ID < 255
Ack.
> > receive external
On Tue, 2020-10-06 at 23:32 +0200, Thomas Gleixner wrote:
> What the heck? Why does this need a setter function which is exported?
> So that random driver writers can fiddle with it?
>
> The affinity mask restriction of an irq domain is already known when the
> domain is created.
It's exported
On Tue, Oct 06, 2020 at 10:56:04PM +0200, Tomasz Figa wrote:
> > Yes. And make sure the API isn't implemented when VIVT caches are
> > used, but that isn't really different from the current interface.
>
> Okay, thanks. Let's see if we can make necessary changes to the videobuf2.
>
> +Sergey
On 05/10/20 17:28, David Woodhouse wrote:
> From: David Woodhouse
>
> This allows the host to indicate that IOAPIC and MSI emulation supports
> 15-bit destination IDs, allowing up to 32Ki CPUs without remapping.
>
> Signed-off-by: David Woodhouse
> ---
> Documentation/virt/kvm/cpuid.rst |
On Wed, 2020-10-07 at 10:14 +0200, Paolo Bonzini wrote:
> Looks like the rest of the series needs some more work, but anyway:
>
> Acked-by: Paolo Bonzini
Thanks.
Yeah, I was expecting the per-irqdomain affinity support to take a few
iterations. But this part, still sticking with the current
On Wed, Oct 7, 2020 at 8:21 AM Christoph Hellwig wrote:
>
> On Tue, Oct 06, 2020 at 10:56:04PM +0200, Tomasz Figa wrote:
> > > Yes. And make sure the API isn't implemented when VIVT caches are
> > > used, but that isn't really different from the current interface.
> >
> > Okay, thanks. Let's see
On Wed, Oct 07, 2020 at 02:21:43PM +0200, Tomasz Figa wrote:
> My initial feeling is that it should work, but we'll give you a
> definitive answer once we prototype it. :)
>
> We might actually give it a try in the USB HCD subsystem as well, to
> implement usb_alloc_noncoherent(), as an
On 7 October 2020 13:59:00 BST, Thomas Gleixner wrote:
>On Wed, Oct 07 2020 at 08:48, David Woodhouse wrote:
>> On Tue, 2020-10-06 at 23:54 +0200, Thomas Gleixner wrote:
>>> On Mon, Oct 05 2020 at 16:28, David Woodhouse wrote:
>> This is the case I called out in the cover letter:
>>
>>
On 07/10/20 10:59, David Woodhouse wrote:
> Yeah, I was expecting the per-irqdomain affinity support to take a few
> iterations. But this part, still sticking with the current behaviour of
> only allowing CPUs to come online at all if they can be reached by all
> interrupts, can probably go in
On Wed, 2020-10-07 at 13:15 +0200, Paolo Bonzini wrote:
> On 07/10/20 10:59, David Woodhouse wrote:
> > Yeah, I was expecting the per-irqdomain affinity support to take a few
> > iterations. But this part, still sticking with the current behaviour of
> > only allowing CPUs to come online at all if
On Fri, Sep 25, 2020 at 09:52:31AM +0800, Lu Baolu wrote:
>
> On 9/24/20 10:08 PM, David Woodhouse wrote:
> > From: David Woodhouse
> >
> > Instead of bailing out completely, such a unit can still be used for
> > interrupt remapping.
>
> Reviewed-by: Lu Baolu
Applied, thanks.
On Wed, Oct 07 2020 at 08:48, David Woodhouse wrote:
> On Tue, 2020-10-06 at 23:54 +0200, Thomas Gleixner wrote:
>> On Mon, Oct 05 2020 at 16:28, David Woodhouse wrote:
> This is the case I called out in the cover letter:
>
> This patch series implements a per-domain "maximum affinity" set and
On Wed, Oct 07 2020 at 14:08, David Woodhouse wrote:
> On 7 October 2020 13:59:00 BST, Thomas Gleixner wrote:
>>On Wed, Oct 07 2020 at 08:48, David Woodhouse wrote:
>>> To fix *that* case, we really do need the whole series giving us per-
>>> domain restricted affinity, and to use it for those
On 7 October 2020 16:57:36 BST, Thomas Gleixner wrote:
>On Wed, Oct 07 2020 at 15:10, David Woodhouse wrote:
>> On Wed, 2020-10-07 at 15:37 +0200, Thomas Gleixner wrote:
>>> What is preventing you to change the function signature? But handing
>>> down irqdomain here is not cutting it. The
On Wed, Oct 07 2020 at 16:05, David Woodhouse wrote:
> On Wed, 2020-10-07 at 16:05 +0200, Thomas Gleixner wrote:
>> The top most MSI irq chip does not even have a compose function, neither
>> for the remap nor for the vector case. The composition is done by the
>> parent domain from the data which
On Wed, Oct 07 2020 at 15:10, David Woodhouse wrote:
> On Wed, 2020-10-07 at 15:37 +0200, Thomas Gleixner wrote:
>> What is preventing you to change the function signature? But handing
>> down irqdomain here is not cutting it. The right thing to do is to
>> replace 'struct irq_affinity_desc
On Wed, 2020-10-07 at 16:05 +0200, Thomas Gleixner wrote:
> On Wed, Oct 07 2020 at 14:08, David Woodhouse wrote:
> > On 7 October 2020 13:59:00 BST, Thomas Gleixner wrote:
> > > On Wed, Oct 07 2020 at 08:48, David Woodhouse wrote:
> > > > To fix *that* case, we really do need the whole series
On Wed, 2020-10-07 at 17:25 +0200, Thomas Gleixner wrote:
> It's clearly how the hardware works. MSI has a message store of some
> sorts and if the entry is enabled then the MSI chip (in PCI or
> elsewhere) will send exactly the message which is in that message
> store. It knows absolutely nothing
On Wed, Oct 07 2020 at 15:23, David Woodhouse wrote:
> On Wed, 2020-10-07 at 16:05 +0200, Thomas Gleixner wrote:
>> On Wed, Oct 07 2020 at 14:08, David Woodhouse wrote:
>> > On 7 October 2020 13:59:00 BST, Thomas Gleixner wrote:
>> > > On Wed, Oct 07 2020 at 08:48, David Woodhouse wrote:
>> > > >
On 7 October 2020 17:02:59 BST, Thomas Gleixner wrote:
>On Wed, Oct 07 2020 at 15:23, David Woodhouse wrote:
>> On Wed, 2020-10-07 at 16:05 +0200, Thomas Gleixner wrote:
>>> On Wed, Oct 07 2020 at 14:08, David Woodhouse wrote:
>>> > On 7 October 2020 13:59:00 BST, Thomas Gleixner wrote:
>>>
On Wed, Oct 07 2020 at 16:46, David Woodhouse wrote:
> The PCI MSI domain, HPET, and even the IOAPIC are just the things out
> there on the bus which might perform those physical address cycles. And
> yes, as you say they're just a message store sending exactly the
> message that was composed for
Hi Denis,
Thank you for your report.
On Tue, 6 Oct 2020 at 17:17, Denis Odintsov wrote:
>
> Hi,
>
> > On 15.07.2020 at 09:06, Tomasz Nowicki wrote:
> >
> > The series is meant to support SMMU for AP806 and a workaround
> > for accessing ARM SMMU 64bit registers is the gist of it.
> >
> > For
On Wed, 2020-10-07 at 15:37 +0200, Thomas Gleixner wrote:
> On Wed, Oct 07 2020 at 08:19, David Woodhouse wrote:
> > On Tue, 2020-10-06 at 23:26 +0200, Thomas Gleixner wrote:
> > > On Mon, Oct 05 2020 at 16:28, David Woodhouse wrote:
> > > > From: David Woodhouse
> > > >
> > > > This is the
On Wed, 2020-10-07 at 16:05 +0200, Thomas Gleixner wrote:
> > > The information has to be a property of the relevant irq domains, and the
> > > hierarchy nicely allows you to retrieve it from there instead of
> > > sprinkling this all over the place.
> >
> > No. This is not a property of the parent
On Wed, Oct 07 2020 at 08:19, David Woodhouse wrote:
> On Tue, 2020-10-06 at 23:26 +0200, Thomas Gleixner wrote:
>> On Mon, Oct 05 2020 at 16:28, David Woodhouse wrote:
>> > From: David Woodhouse
>> >
>> > This is the maximum possible set of CPUs which can be used. Use it
>> > to calculate the
https://bugzilla.kernel.org/show_bug.cgi?id=209321
Not much detail in the bugzilla yet, but apparently this started in
v5.8.0-rc1:
DMAR: [DMA Read] Request device [03:00.0] PASID fault addr fffd3000
[fault reason 06] PTE Read access is not set
Currently assigned to Driver/PCI, but
On Wed, 7 Oct 2020, Christoph Hellwig wrote:
> On Tue, Oct 06, 2020 at 01:46:12PM -0700, Stefano Stabellini wrote:
> > OK, this makes a lot of sense, and I like the patch because it makes the
> > swiotlb interface clearer.
> >
> > Just one comment below.
> >
>
> > > +phys_addr_t
On Wed, 2020-10-07 at 19:23 +0200, Thomas Gleixner wrote:
> > It so happens that in Linux, we don't really architect the software
> > like that. So each of the PCI MSI domain, HPET, and IOAPIC have their
> > *own* message composer which has the same limits and composes basically
> > the same
On Wed, Oct 07 2020 at 17:11, David Woodhouse wrote:
> On 7 October 2020 16:57:36 BST, Thomas Gleixner wrote:
>>There isn't a lot of nastiness.
>
> OK, but I think we do have to cope with the fact that the limit is
> dynamic, and a CPU might be added which widens the mask. I think
> that's