On 19 July 2017 at 08:22, Linus Torvalds <torva...@linux-foundation.org> wrote:
> On Tue, Jul 18, 2017 at 2:21 PM, Dave Airlie <airl...@gmail.com> wrote:
>>
>> Oh and just FYI, the machine I've tested this on has an mgag200 server
>> graphics card backing the framebuffer, but with just efifb loaded.
>
> Yeah, it looks like it needs special hardware - and particularly the
> kind of garbage hardware that people only have on servers.
>
> Why do server people continually do absolute sh*t hardware? It's crap,
> crap, crap across the board outside the CPU. Nasty and bad hacky stuff
> that nobody else would touch with a ten-foot pole, and the "serious
> enterprise" people lap it up like it was ambrosia.
>
> It's not just "graphics is bad anyway since we don't care". It's all
> the things they ostensibly _do_ care about too, like the disk and the
> fabric infrastructure. Buggy nasty crud.

I've tried to reproduce now on:
Intel(R) Core(TM) i7-6820HQ CPU @ 2.70GHz
using some address space from
02:00.0 3D controller: NVIDIA Corporation GM108M [GeForce 940MX] (rev a2)

And I don't see the issue.

I'll try to track down some more EFI-compatible mga or other weird server
chips if I can.

> Anyway, rant over. I wonder if we could show this without special
> hardware by just mapping some region that doesn't even have hardware
> in it as WC. Do we even expose the PAT settings to user space, though,
> or do we always have to have some fake module to create the PAT stuff?

I do wonder wtf the hw could be doing that would cause this, but I've no idea
how to tell what difference a write-combined PCI transaction would have on the
bus side of things, or what the device could generate that would cause such
a horrible slowdown.

Dave.
