On Sat, Jun 27, 2020 at 06:13:43PM +0200, Marion & Christophe JAILLET wrote:
> I'm sorry, but I will not send a pci_ --> dma_ conversion for this driver.
> I'm a bit puzzled by some choices of GFP_KERNEL and GFP_ATOMIC that are not
> all that obvious to me.
>
> I'll try to send some patches for other eas[...]
On 24/06/2020 09:38, Christoph Hellwig wrote:
> Hi Guenter,
>
> can you try the patch below? This just converts the huge allocations
> in mptbase to use GFP_KERNEL. Christophe (added to Cc) actually has
> a scripted conversion for the rest that he hasn't posted yet, so I'll
> aim for the minimal version here.
On Wed, Jun 24, 2020 at 09:38:15AM +0200, Christoph Hellwig wrote:
> Hi Guenter,
>
> can you try the patch below? This just converts the huge allocations
> in mptbase to use GFP_KERNEL. Christophe (added to Cc) actually has
> a scripted conversion for the rest that he hasn't posted yet, so I'll
> aim for the minimal version here.
Hi Guenter,
can you try the patch below? This just converts the huge allocations
in mptbase to use GFP_KERNEL. Christophe (added to Cc) actually has
a scripted conversion for the rest that he hasn't posted yet, so I'll
aim for the minimal version here.
diff --git a/drivers/message/fusion/mptba[...]
On Mon, Jun 22, 2020 at 05:07:55PM +0100, Robin Murphy wrote:
> Another angle, though, is to question why this driver is making such a
> large allocation with GFP_ATOMIC in the first place. At a glance it looks
> like there's no reason at all other than that it's still using the legacy
> pci_all[...]
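For context on the legacy API Robin mentions: pci_alloc_consistent() is a compat wrapper that calls dma_alloc_coherent() with a hard-coded GFP_ATOMIC, so moving a sleepable path to the dma_ API is also what makes GFP_KERNEL possible. A minimal sketch of the conversion pattern (the ioc->pcidev field matches mptbase, but the surrounding lines are illustrative, not the actual posted patch):

```diff
-	mem = pci_alloc_consistent(ioc->pcidev, total_size, &alloc_dma);
+	mem = dma_alloc_coherent(&ioc->pcidev->dev, total_size, &alloc_dma,
+				 GFP_KERNEL);
 	if (mem == NULL)
 		goto out_fail;
 ...
-	pci_free_consistent(ioc->pcidev, total_size, mem, alloc_dma);
+	dma_free_coherent(&ioc->pcidev->dev, total_size, mem, alloc_dma);
```

GFP_KERNEL is only safe where the caller may sleep (e.g. probe/bringup); call sites reached under a spinlock would still need GFP_ATOMIC, which is presumably why some of the choices looked non-obvious during the scripted conversion.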
On 2020-06-21 21:20, David Rientjes wrote:
> On Sun, 21 Jun 2020, Guenter Roeck wrote:
> > This patch results in a boot failure in some of my powerpc boot tests,
> > specifically those testing boots from mptsas1068 devices. Error message:
> >
> > mptsas 0000:00:02.0: enabling device (0000 -> 0002)
> > mptbase: ioc0: Initiating bringup
On Sun, 21 Jun 2020, Guenter Roeck wrote:
> >> This patch results in a boot failure in some of my powerpc boot tests,
> >> specifically those testing boots from mptsas1068 devices. Error message:
> >>
> >> mptsas 0000:00:02.0: enabling device (0000 -> 0002)
> >> mptbase: ioc0: Initiating bringup
>
On 6/21/20 1:35 AM, Geert Uytterhoeven wrote:
> Hi Günter,
>
> On Sat, Jun 20, 2020 at 10:09 PM Guenter Roeck wrote:
>> On Mon, Jun 08, 2020 at 03:22:17PM +0200, Geert Uytterhoeven wrote:
>>> On systems with at least 32 MiB, but less than 32 GiB of RAM, the DMA
>>> memory pools are much larger than intended (e.g. 2 MiB instead of 128
>>> KiB on a 256 MiB system).
Hi Günter,

On Sat, Jun 20, 2020 at 10:09 PM Guenter Roeck wrote:
> On Mon, Jun 08, 2020 at 03:22:17PM +0200, Geert Uytterhoeven wrote:
> > On systems with at least 32 MiB, but less than 32 GiB of RAM, the DMA
> > memory pools are much larger than intended (e.g. 2 MiB instead of 128
> > KiB on a 256 MiB system).
On Mon, Jun 08, 2020 at 03:22:17PM +0200, Geert Uytterhoeven wrote:
> On systems with at least 32 MiB, but less than 32 GiB of RAM, the DMA
> memory pools are much larger than intended (e.g. 2 MiB instead of 128
> KiB on a 256 MiB system).
>
> Fix this by correcting the calculation of the number of GiBs of RAM in
> the system.
Thanks,
applied to the dma-mapping tree for 5.8.
_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu
On Mon, 8 Jun 2020, Geert Uytterhoeven wrote:
> On systems with at least 32 MiB, but less than 32 GiB of RAM, the DMA
> memory pools are much larger than intended (e.g. 2 MiB instead of 128
> KiB on a 256 MiB system).
>
> Fix this by correcting the calculation of the number of GiBs of RAM in
> the system.
On systems with at least 32 MiB, but less than 32 GiB of RAM, the DMA
memory pools are much larger than intended (e.g. 2 MiB instead of 128
KiB on a 256 MiB system).

Fix this by correcting the calculation of the number of GiBs of RAM in
the system. Invert the order of the min/max operations, to k[...]