Re: [PATCH v14 0/8] mm/gup: Introduce memfd_pin_folios() for pinning memfd folios

2024-05-23 Thread Gerd Hoffmann
On Thu, May 23, 2024 at 01:13:11PM GMT, Dave Airlie wrote:
> Hey
> 
> Gerd, do you have any time to look at this series again, I think at
> v14 we should probably consider landing it.

Phew.  Didn't follow recent MM changes closely, don't know much about
folios beyond LWN coverage.  The changes look sane to my untrained eye,
I wouldn't rate that a 'review' though.

The patch series structure looks a bit odd: patch #5 adds hugetlb
support, and the functions it adds are removed again in patch #7 after
switching to folios.  But maybe regression testing the series is easier
that way ...

Acked-by: Gerd Hoffmann 

take care,
  Gerd



Re: [PATCH v2] drm/virtio: Add support for non-native buffer formats

2023-11-16 Thread Gerd Hoffmann
On Thu, Nov 16, 2023 at 02:16:54PM +0100, Geert Uytterhoeven wrote:
> When using virtgpu on a big-endian machine, e.g. powerpc QEMU:
> 
> virtio-pci :00:02.0: [drm] *ERROR* fbdev: Failed to setup generic 
> emulation (ret=-2)
> 
> or m68k/virt:
> 
> virtio-mmio virtio-mmio.125: [drm] *ERROR* fbdev: Failed to setup generic 
> emulation (ret=-2)
> 
> and the graphical display fails to come up.
> 
> Before, the call to drm_mode_addfb() caused a translation from a fourcc
> format (XR24) to a bpp/depth pair (32/24) to a potentially different fourcc
> format (BX24 on big-endian), due to the quirk processing in
> drm_driver_legacy_fb_format().  After, the original fourcc format (XR24)
> is passed unmodified.
> 
> However, the virtgpu DRM driver supports only a single format for its
> main plane: DRM_FORMAT_HOST_XRGB8888, which is XR24 on little-endian,
> and BX24 on big-endian.  I.e. on big-endian, virtgpu does not support
> XR24, which is the default DRM format, and must be supported by all
> drivers.  Before, this was reported, but didn't lead to a failure:
> 
> virtio-mmio virtio-mmio.125: [drm] bpp/depth value of 32/24 not supported
> virtio-mmio virtio-mmio.125: [drm] No compatible format found
> 
> As the core virtgpu driver and device support both XR24 and BX24 on both
> little-endian and big-endian just fine, fix this by extending the list
> of supported formats for the main plane and cursor plane to XR24/BX24
> resp. AR24/BA24.
> 
> Fixes: 6ae2ff23aa43a0c4 ("drm/client: Convert drm_client_buffer_addfb() to 
> drm_mode_addfb2()")
> Reported-by: Christian Zigotzky 
> Closes: 
> https://lore.kernel.org/r/c47fba21-3ae9-4021-9f4a-09c2670eb...@xenosoft.de
> Suggested-by: Gerd Hoffmann 
> Signed-off-by: Geert Uytterhoeven 

Reviewed-by: Gerd Hoffmann 



Re: Fbdev issue after the drm updates 'drm-next-2023-10-31-1'

2023-11-15 Thread Gerd Hoffmann
On Wed, Nov 15, 2023 at 09:33:28AM +0100, Geert Uytterhoeven wrote:
> Hi Christian,
> 
> CC virtgpu
> 
> On Tue, Nov 14, 2023 at 10:45 AM Christian Zigotzky
>  wrote:
> > On 13 November 2023 at 01:48 pm, Geert Uytterhoeven wrote:
> > > I can confirm there is no graphics output with m68k/virt, and
> 
> Before the error message you reported:
> 
> virtio-mmio virtio-mmio.125: [drm] *ERROR* fbdev: Failed to setup
> generic emulation (ret=-2)
> 
> it also prints:
> 
> virtio-mmio virtio-mmio.125: [drm] bpp/depth value of 32/24 not supported
> virtio-mmio virtio-mmio.125: [drm] No compatible format found
> 
> Upon closer look, it turns out virtgpu is special in that its main
> plane supports only a single format: DRM_FORMAT_HOST_XRGB8888, which
> is XR24 on little-endian, and BX24 on big-endian.  I.e. on big-endian,
> virtgpu does not support XR24.

Driver and device support both XR24 and BX24 on both little-endian and
big-endian just fine.

The problem is that both the fbdev interfaces and the ADDFB ioctl
specify the format using bpp instead of a fourcc, and advertising only
one framebuffer format -- in native byte order -- used to work best,
especially on big-endian machines.

That was years ago though, IIRC predating the generic fbdev emulation,
so maybe it's time to revisit that.  Changing it should be as simple as
updating the format arrays:

--- a/drivers/gpu/drm/virtio/virtgpu_plane.c
+++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
@@ -30,11 +30,13 @@
 #include "virtgpu_drv.h"
 
 static const uint32_t virtio_gpu_formats[] = {
-   DRM_FORMAT_HOST_XRGB8888,
+   DRM_FORMAT_XRGB8888,
+   DRM_FORMAT_BGRX8888,
 };
 
 static const uint32_t virtio_gpu_cursor_formats[] = {
-   DRM_FORMAT_HOST_ARGB8888,
+   DRM_FORMAT_ARGB8888,
+   DRM_FORMAT_BGRA8888,
 };
 
 uint32_t virtio_gpu_translate_format(uint32_t drm_fourcc)

HTH,
  Gerd



Re: [PATCH] drm/virtgpu: Replace dev_private by helper function

2023-06-21 Thread Gerd Hoffmann
On Tue, Jun 20, 2023 at 12:33:28PM +0200, Thomas Zimmermann wrote:
> Dereference struct drm_device.dev_private in the helper function
> to_virtio_gpu_device(). The dev_private field is deprecated and
> drivers are advised not to use it. Encapsulating it in a helper
> function will help with a later removal. No functional changes.
> 
> Signed-off-by: Thomas Zimmermann 

Acked-by: Gerd Hoffmann 



Re: [PATCH] udmabuf: revert 'Add support for mapping hugepages (v4)'

2023-06-19 Thread Gerd Hoffmann
On Thu, Jun 08, 2023 at 01:49:27PM -0700, Mike Kravetz wrote:
> This effectively reverts commit 16c243e99d33 ("udmabuf: Add support
> for mapping hugepages (v4)").  Recently, Junxiao Chang found a BUG
> with page map counting as described here [1].  This issue pointed out
> that the udmabuf driver was making direct use of subpages of hugetlb
> pages.  This is not a good idea, and no other mm code attempts such use.
> In addition to the mapcount issue, this also causes issues with hugetlb
> vmemmap optimization and page poisoning.
> 
> For now, remove hugetlb support.
> 
> If udmabuf wants to be used on hugetlb mappings, it should be changed to
> only use complete hugetlb pages.  This will require different alignment
> and size requirements on the UDMABUF_CREATE API.
> 
> [1] 
> https://lore.kernel.org/linux-mm/20230512072036.1027784-1-junxiao.ch...@intel.com/
> 
> Fixes: 16c243e99d33 ("udmabuf: Add support for mapping hugepages (v4)")
> Cc: 
> Signed-off-by: Mike Kravetz 

Acked-by: Gerd Hoffmann 



Re: [PATCH] mm: fix hugetlb page unmap count balance issue

2023-06-19 Thread Gerd Hoffmann
On Mon, May 15, 2023 at 10:04:42AM -0700, Mike Kravetz wrote:
> On 05/12/23 16:29, Mike Kravetz wrote:
> > On 05/12/23 14:26, James Houghton wrote:
> > > On Fri, May 12, 2023 at 12:20 AM Junxiao Chang  
> > > wrote:
> > > 
> > > This alone doesn't fix mapcounting for PTE-mapped HugeTLB pages. You
> > > need something like [1]. I can resend it if that's what we should be
> > > doing, but this mapcounting scheme doesn't work when the page structs
> > > have been freed.
> > > 
> > > It seems like it was a mistake to include support for hugetlb memfds in 
> > > udmabuf.
> > 
> > IIUC, it was added with commit 16c243e99d33 udmabuf: Add support for mapping
> > hugepages (v4).  Looks like it was never sent to linux-mm?  That is 
> > unfortunate
> > as hugetlb vmemmap freeing went in at about the same time.  And, as you have
> > noted udmabuf will not work if hugetlb vmemmap freeing is enabled.
> > 
> > Sigh!
> > 
> > Trying to think of a way forward.
> > -- 
> > Mike Kravetz
> > 
> > > 
> > > [1]: 
> > > https://lore.kernel.org/linux-mm/20230306230004.1387007-2-jthough...@google.com/
> > > 
> > > - James
> 
> Adding people and list on Cc: involved with commit 16c243e99d33.
> 
> There are several issues with trying to map tail pages of hugetlb pages
> not taken into account by udmabuf.  James spent quite a bit of time trying
> to understand and address all the issues with the HGM code.  While using
> the scheme proposed by James may be an approach to the mapcount issue,
> there are also other issues that need attention.  For example, I do not
> see how the fault code checks the state of the hugetlb page (such as
> poison) as none of that state is carried in tail pages.
> 
> The more I think about it, the more I think udmabuf should treat hugetlb
> pages as hugetlb pages.  They should be mapped at the appropriate level
> in the page table.  Of course, this would impose new restrictions on the
> API (mmap and ioctl) that may break existing users.  I have no idea how
> extensively udmabuf is being used with hugetlb mappings.

User of this is qemu.  It can use the udmabuf driver to create host
dma-bufs for guest resources (virtio-gpu buffers), to avoid copying
data when showing the guest display in a host window.

hugetlb support is needed in case qemu guest memory is backed by
hugetlbfs.  That does not imply the virtio-gpu buffers are hugepage
aligned though, udmabuf would still need to operate on smaller chunks
of memory.  So with additional restrictions this will not work any
more for qemu.  I'd suggest to just revert hugetlb support instead
and go back to the drawing board.

Also not sure why hugetlbfs is used for guest memory in the first place.
It used to be a thing years ago, but with the arrival of transparent
hugepages there is, as far as I know, little reason to still use hugetlbfs.

Vivek? Dongwon?

take care,
  Gerd



Re: [PATCH v5 09/44] drm: handle HAS_IOPORT dependencies

2023-05-22 Thread Gerd Hoffmann
  Hi,

> > There is also a direct and hard coded use in cirrus.c which according to
> > the comment is only necessary during resume.  Let's just skip this as
> > for example s390 which doesn't have I/O port support also doesn't
> > support suspend/resume.
> 
> I think we should consider making cirrus depend on HAS_IOPORT. The driver is
> only for qemu's cirrus emulation, which IIRC can only be enabled for i586.

Agree.  cirrus is x86 only (both i386 / x86_64 though).  Just require
HAS_IOPORT and be done with it.

> And it has all been deprecated long ago.

The fact that cirrus used to be the qemu default for many years is
pretty much the only reason it is still somewhat relevant today ...

take care,
  Gerd



Re: [PATCH v6 0/3] Add sync object UAPI support to VirtIO-GPU driver

2023-05-03 Thread Gerd Hoffmann
On Mon, May 01, 2023 at 06:38:45PM +0300, Dmitry Osipenko wrote:
> On 4/16/23 14:52, Dmitry Osipenko wrote:
> > We have multiple Vulkan context types that are awaiting the addition
> > of the sync object DRM UAPI support to the VirtIO-GPU kernel driver:
> > 
> >  1. Venus context
> >  2. Native contexts (virtio-freedreno, virtio-intel, virtio-amdgpu)
> > 
> > Mesa core supports DRM sync object UAPI, providing Vulkan drivers with a
> > generic fencing implementation that we want to utilize.
> > 
> > This patch adds initial sync objects support. It creates a foundation for
> > further fencing improvements. Later on we will want to extend the VirtIO-GPU
> > fencing API with passing fence IDs to the host for waiting; it will be a new
> > additional VirtIO-GPU IOCTL and more. Today we have several VirtIO-GPU context
> > drivers in the works that require VirtIO-GPU to support the sync objects UAPI.
> > 
> > The patch is heavily inspired by the sync object UAPI implementation of the
> > MSM driver.
> 
> Gerd, do you have any objections to merging this series?

No objections.  Can't spot any issues, but I also don't follow drm
closely enough to be able to review the sync object logic in detail.

Acked-by: Gerd Hoffmann 

take care,
  Gerd



Re: [PATCH] drm/virtio: Enable fb damage clips property for the primary plane

2023-03-13 Thread Gerd Hoffmann
On Fri, Mar 10, 2023 at 01:59:42PM +0100, Javier Martinez Canillas wrote:
> Christian Hergert reports that the driver doesn't enable the property and
> that leads to always doing a full plane update, even when the driver does
> support damage clipping for the primary plane.
> 
> Don't enable it for the cursor plane, because its .atomic_update callback
> doesn't handle damage clips.
> 
> Reported-by: Christian Hergert 
> Signed-off-by: Javier Martinez Canillas 

Acked-by: Gerd Hoffmann 



Re: [PATCH v2] drm/virtio: Fix handling CONFIG_DRM_VIRTIO_GPU_KMS option

2023-03-06 Thread Gerd Hoffmann
On Mon, Mar 06, 2023 at 05:32:34PM +0300, Dmitry Osipenko wrote:
> VirtIO-GPU got a new config option for disabling KMS. There were two
> problems left unnoticed during review when the new option was added:
> 
> 1. The IS_ENABLED(CONFIG_DRM_VIRTIO_GPU_KMS) check in the code was
> inverted, hence KMS was disabled when it should be enabled and vice versa.
> 
> 2. The disabled KMS crashed kernel with a NULL dereference in
> drm_kms_helper_hotplug_event(), which shall not be invoked with a
> disabled KMS.
> 
> Fix the inverted config option check in the code and skip handling the
> VIRTIO_GPU_EVENT_DISPLAY sent by host when KMS is disabled in guest to fix
> the crash.
> 
> Fixes: 72122c69d717 ("drm/virtio: Add option to disable KMS support")
> Signed-off-by: Dmitry Osipenko 
> ---
> 
> Changelog:
> 
> v2: - Moved the "has_edid" under the "num_scanouts" condition, as was
>   suggested by Gerd Hoffmann.

Acked-by: Gerd Hoffmann 



Re: [PATCH v1] drm/virtio: Fix handling CONFIG_DRM_VIRTIO_GPU_KMS option

2023-03-05 Thread Gerd Hoffmann
> +++ b/drivers/gpu/drm/virtio/virtgpu_kms.c
> @@ -45,9 +45,11 @@ static void virtio_gpu_config_changed_work_func(struct 
> work_struct *work)
>   if (events_read & VIRTIO_GPU_EVENT_DISPLAY) {
>   if (vgdev->has_edid)
>   virtio_gpu_cmd_get_edids(vgdev);
> - virtio_gpu_cmd_get_display_info(vgdev);
> - virtio_gpu_notify(vgdev);
> - drm_helper_hpd_irq_event(vgdev->ddev);
> + if (vgdev->num_scanouts) {
> + virtio_gpu_cmd_get_display_info(vgdev);
> + virtio_gpu_notify(vgdev);
> + drm_helper_hpd_irq_event(vgdev->ddev);
> + }

I'd suggest to make the edid lines conditional too.
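For illustration, the suggested shape (with the EDID query moved under the num_scanouts check as well) might look like the sketch below; it uses a simplified stand-in struct instead of the real struct virtio_gpu_device, and counters in place of the real command helpers:

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-in for struct virtio_gpu_device (the real one lives
 * in virtgpu_drv.h); the counters record which commands were issued. */
struct fake_vgdev {
	int num_scanouts;
	bool has_edid;
	int edid_queries;
	int display_info_queries;
};

/* Sketch of the config-changed handler with the EDID query guarded by
 * num_scanouts, just like the display-info query. */
static void handle_display_event(struct fake_vgdev *vgdev)
{
	if (vgdev->num_scanouts) {
		if (vgdev->has_edid)
			vgdev->edid_queries++;	   /* virtio_gpu_cmd_get_edids() */
		vgdev->display_info_queries++;	   /* virtio_gpu_cmd_get_display_info() */
	}
}
```

With KMS disabled (num_scanouts == 0) neither query is issued, even when has_edid is still set.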

> - if (IS_ENABLED(CONFIG_DRM_VIRTIO_GPU_KMS) || !vgdev->num_scanouts) {
> + if (!IS_ENABLED(CONFIG_DRM_VIRTIO_GPU_KMS) || !vgdev->num_scanouts) {
>   DRM_INFO("KMS disabled\n");
>   vgdev->num_scanouts = 0;
>   vgdev->has_edid = false;

Doesn't make a difference because has_edid gets set to false here,
but IMHO it is less confusing that way.

take care,
  Gerd



Re: [PATCH v6] drm/virtio: Add option to disable KMS support

2023-03-02 Thread Gerd Hoffmann
On Thu, Mar 02, 2023 at 03:35:06PM -0800, Rob Clark wrote:
> From: Rob Clark 
> 
> Add a build option to disable modesetting support.  This is useful in
> cases where the guest only needs to use the GPU in a headless mode, or
> (such as in the CrOS usage) window surfaces are proxied to a host
> compositor.
> 
> As the modesetting ioctls are a big surface area for potential security
> bugs to be found (it's happened in the past, we should assume it will
> again in the future), it makes sense to have a build option to disable
> those ioctls in cases where they serve no legitimate purpose.
> 
> v2: Use more if (IS_ENABLED(...))
> v3: Also permit the host to advertise no scanouts
> v4: Spiff out commit msg
> v5: Make num_scanouts==0 and DRM_VIRTIO_GPU_KMS=n behave the same
> v6: Drop conditionally building virtgpu_display.c and early-out of
> its init/fini fxns instead

Reviewed-by: Gerd Hoffmann 



Re: [PATCH v5] drm/virtio: Add option to disable KMS support

2023-03-01 Thread Gerd Hoffmann
On Thu, Mar 02, 2023 at 12:39:33AM +0300, Dmitry Osipenko wrote:
> On 3/1/23 21:54, Rob Clark wrote:
> >  /* virtgpu_display.c */
> > +#if defined(CONFIG_DRM_VIRTIO_GPU_KMS)
> >  int virtio_gpu_modeset_init(struct virtio_gpu_device *vgdev);
> >  void virtio_gpu_modeset_fini(struct virtio_gpu_device *vgdev);
> > +#else
> > +static inline int virtio_gpu_modeset_init(struct virtio_gpu_device *vgdev)
> > +{
> > +   return 0;
> > +}
> > +static inline void virtio_gpu_modeset_fini(struct virtio_gpu_device *vgdev)
> > +{
> > +}
> > +#endif
> 
> In v4 Gerd wanted to keep building the virtgpu_display.o and instead add
> the KMS config check to virtio_gpu_modeset_init/fini().

The main point is that the code workflow should be the same in both
cases.  The patch does that for virtio_gpu_modeset_init() but doesn't
for virtio_gpu_modeset_fini().

Returning early in the functions (and dropping the #ifdef here) is how
I would implement this, but I wouldn't insist on that; there are other
ways to solve this too ;)
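A standalone sketch of that early-return shape, with a compile-time constant standing in for CONFIG_DRM_VIRTIO_GPU_KMS and an illustrative stand-in for the device struct (names are made up for the example):

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for IS_ENABLED(CONFIG_DRM_VIRTIO_GPU_KMS). */
#define KMS_ENABLED true

/* Simplified stand-in for struct virtio_gpu_device. */
struct fake_vgdev {
	int modeset_initialized;
};

/* Both functions are always built; they just return early when KMS is
 * compiled out, so the call flow stays identical in both configurations. */
static int modeset_init(struct fake_vgdev *vgdev)
{
	if (!KMS_ENABLED)
		return 0;
	vgdev->modeset_initialized = 1;
	return 0;
}

static void modeset_fini(struct fake_vgdev *vgdev)
{
	if (!KMS_ENABLED)
		return;
	vgdev->modeset_initialized = 0;
}
```

Flipping KMS_ENABLED to false turns both functions into no-ops without any #ifdef in the header.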

take care,
  Gerd



Re: [PATCH v5] drm/virtio: Add option to disable KMS support

2023-03-01 Thread Gerd Hoffmann
  Hi,

> + if (vgdev->num_scanouts) {
> + ret = virtio_gpu_modeset_init(vgdev);

The call to virtio_gpu_modeset_fini() in the unregister
code path needs this too.

Otherwise the patch looks good to me, so with that fixed:
Acked-by: Gerd Hoffmann 

take care,
  Gerd



Re: [PATCH v4] drm/virtio: Add option to disable KMS support

2023-02-28 Thread Gerd Hoffmann
On Wed, Mar 01, 2023 at 03:37:24AM +0300, Dmitry Osipenko wrote:
> On 2/28/23 18:54, Rob Clark wrote:
> > From: Rob Clark 
> > 
> > Add a build option to disable modesetting support.  This is useful in
> > cases where the guest only needs to use the GPU in a headless mode, or
> > (such as in the CrOS usage) window surfaces are proxied to a host
> > compositor.
> > 
> > As the modesetting ioctls are a big surface area for potential security
> > bugs to be found (it's happened in the past, we should assume it will
> > again in the future), it makes sense to have a build option to disable
> > those ioctls in cases where they serve no legitimate purpose.
> > 
> > v2: Use more if (IS_ENABLED(...))
> > v3: Also permit the host to advertise no scanouts
> > v4: Spiff out commit msg
> > 
> > Signed-off-by: Rob Clark 
> > Reviewed-by: Dmitry Osipenko 
> > ---
> 
> Gerd, to give you some context on the v4.. we've chatted a bit more on
> the #dri-devel and concluded that config option is the most robust way
> of having KMS disabled from a security stand point. We would also want
> to have a per-driver option (and not global) because there are scenarios
> of using passthrough GPU + virtio-gpu in a guest, hence we would only
> want to toggle KMS for a particular driver.

IMHO both options to disable the KMS bits should work the same way.
With the current patch modeset_init() runs with num_scanouts == 0 but
doesn't with CONFIG_KMS=n.  There are also two different ways to tweak
driver_features.  Can we get rid of that please, for robustness reasons?

I'd suggest to have an is_kms_enabled() helper function (probably best
as inline so gcc can see it is constant-false for CONFIG_KMS=n and throw
away unreachable code).  Add "if (!is_kms_enabled()) return;" to
modeset_init() and modeset_fini() instead of stubbing them out.  Use the
drm_device->driver_features override in both cases.

Also the edid check can go away.  As already mentioned, this is about a
device feature, not an EDID blob being present.
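Putting those pieces together, a hypothetical sketch of the helper plus the single driver_features override path; flag values and struct layout here are illustrative, not the real DRM definitions:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative flag values; the real ones live in include/drm/drm_drv.h. */
#define FAKE_DRIVER_MODESET (1u << 0)
#define FAKE_DRIVER_ATOMIC  (1u << 1)
#define FAKE_DRIVER_RENDER  (1u << 2)

/* Stand-in for the Kconfig option. */
#define CONFIG_KMS_ENABLED true

/* Simplified stand-in for struct virtio_gpu_device. */
struct fake_vgdev {
	int num_scanouts;
	unsigned int driver_features;
};

/* Constant-false when KMS is compiled out, so the compiler can drop
 * unreachable KMS code behind it. */
static inline bool is_kms_enabled(const struct fake_vgdev *vgdev)
{
	return CONFIG_KMS_ENABLED && vgdev->num_scanouts > 0;
}

/* One override path for both ways of disabling KMS (CONFIG_KMS=n or
 * num_scanouts == 0). */
static void apply_kms_features(struct fake_vgdev *vgdev)
{
	if (!is_kms_enabled(vgdev))
		vgdev->driver_features &= ~(FAKE_DRIVER_MODESET | FAKE_DRIVER_ATOMIC);
}
```

Either way of disabling KMS then funnels through the same check and the same feature override.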

take care,
  Gerd



Re: [PATCH v3] drm/virtio: Add option to disable KMS support

2023-02-28 Thread Gerd Hoffmann
  Hi,

> + if (!vgdev->num_scanouts) {
> + /*
> +  * Having an EDID but no scanouts is non-sensical,
> +  * but it is permitted to have no scanouts and no
> +  * EDID (in which case DRIVER_MODESET and
> +  * DRIVER_ATOMIC are not advertised)
> +  */
> + if (vgdev->has_edid) {
> + DRM_ERROR("num_scanouts is zero\n");

That error message isn't very clear.

Also I'd suggest to just drop the edid check.  It's about commands
being supported by the device, not about the actual presence of an EDID
blob, so the check doesn't look very useful to me.

take care,
  Gerd



Re: [PATCH] drm/virtio: Add option to disable KMS support

2023-02-27 Thread Gerd Hoffmann
On Mon, Feb 27, 2023 at 07:40:11AM -0800, Rob Clark wrote:
> On Sun, Feb 26, 2023 at 10:38 PM Gerd Hoffmann  wrote:
> >
> > On Fri, Feb 24, 2023 at 10:02:24AM -0800, Rob Clark wrote:
> > > From: Rob Clark 
> > >
> > > Add a build option to disable modesetting support.  This is useful in
> > > cases where the guest only needs to use the GPU in a headless mode, or
> > > (such as in the CrOS usage) window surfaces are proxied to a host
> > > compositor.
> >
> > Why make that a compile time option?  There is a config option for the
> > number of scanouts (aka virtual displays) a device has.  Just set that
> > to zero (and fix the driver to not consider that configuration an
> > error).
> 
> The goal is to not advertise DRIVER_MODESET (and DRIVER_ATOMIC).. I
> guess that could be done based on whether there are any scanouts, but
> it would mean making the drm_driver struct non-const.

Apparently there is a drm_device->driver_features override,
(amdgpu uses that).  The driver could simply drop the DRIVER_MODESET and
DRIVER_ATOMIC bits in case no scanout is present instead of throwing an
error.

> And I think it is legitimate to allow the guest to make this choice,
> regardless of what the host decides to expose, since it is about the
> ioctl surface area that the guest kernel exposes to guest userspace.

I think it is a bad idea to make that a compile time option; I'd suggest
a runtime switch instead, for example a module parameter to ask the
driver to ignore any scanouts.

take care,
  Gerd



Re: [PATCH] drm/virtio: Add option to disable KMS support

2023-02-26 Thread Gerd Hoffmann
On Fri, Feb 24, 2023 at 10:02:24AM -0800, Rob Clark wrote:
> From: Rob Clark 
> 
> Add a build option to disable modesetting support.  This is useful in
> cases where the guest only needs to use the GPU in a headless mode, or
> (such as in the CrOS usage) window surfaces are proxied to a host
> compositor.

Why make that a compile time option?  There is a config option for the
number of scanouts (aka virtual displays) a device has.  Just set that
to zero (and fix the driver to not consider that configuration an
error).

take care,
  Gerd



Re: [PATCH 17/17] drm/cirrus: Use VGA macro constants to unblank

2023-02-20 Thread Gerd Hoffmann
On Mon, Feb 20, 2023 at 03:22:03PM +0100, Thomas Zimmermann wrote:
> Hi
> 
> Am 16.02.23 um 12:33 schrieb Gerd Hoffmann:
> > On Wed, Feb 15, 2023 at 05:15:17PM +0100, Thomas Zimmermann wrote:
> > > Set the VGA bit for unblanking with macro constants instead of magic
> > > values. No functional changes.
> > 
> > blank/unblank should work similarly to bochs (see commit 250e743915d4);
> > that is maybe a nice thing to add if you modernize the driver anyway.
> > 
> > take care,
> >Gerd
> > 
> 
> Do you have comments on the other patches?

Checked briefly only, but it looked sane overall.  Seems the blit and
format conversion helpers improved a lot since I added them initially
(I don't follow drm that closely any more, busy with other stuff); nice
to see cirrus being updated to that and getting dirty tracking support.

Acked-by: Gerd Hoffmann 

take care,
  Gerd



Re: [PATCH 17/17] drm/cirrus: Use VGA macro constants to unblank

2023-02-16 Thread Gerd Hoffmann
On Thu, Feb 16, 2023 at 02:52:51PM +0200, Ville Syrjälä wrote:
> On Thu, Feb 16, 2023 at 01:03:02PM +0100, Thomas Zimmermann wrote:
> > Hi,
> > 
> > thanks for taking a look at the patches.
> > 
> > Am 16.02.23 um 12:33 schrieb Gerd Hoffmann:
> > > On Wed, Feb 15, 2023 at 05:15:17PM +0100, Thomas Zimmermann wrote:
> > >> Set the VGA bit for unblanking with macro constants instead of magic
> > >> values. No functional changes.
> > > 
> > > blank/unblank should work similarly to bochs (see commit 250e743915d4);
> > > that is maybe a nice thing to add if you modernize the driver anyway.
> > Yeah, it's the VGA PAS field. [1] But is it really called blanking? PAS 
> > controls palette access, but blanking sounds more like DPMS.
> 
> Why aren't people just using the normal way of flipping the
> screen off bit in sequencer register 01?

qemu vga emulation doesn't check that bit ...

take care,
  Gerd



Re: [PATCH 17/17] drm/cirrus: Use VGA macro constants to unblank

2023-02-16 Thread Gerd Hoffmann
On Thu, Feb 16, 2023 at 01:03:02PM +0100, Thomas Zimmermann wrote:
> Hi,
> 
> thanks for taking a look at the patches.
> 
> Am 16.02.23 um 12:33 schrieb Gerd Hoffmann:
> > On Wed, Feb 15, 2023 at 05:15:17PM +0100, Thomas Zimmermann wrote:
> > > Set the VGA bit for unblanking with macro constants instead of magic
> > > values. No functional changes.
> > 
> > blank/unblank should work similarly to bochs (see commit 250e743915d4);
> > that is maybe a nice thing to add if you modernize the driver anyway.
> Yeah, it's the VGA PAS field. [1] But is it really called blanking? PAS
> controls palette access, but blanking sounds more like DPMS.

Yes, strictly speaking it is not the same thing.  DPMS blank will send
the monitor into suspend mode, which this does not.  On virtual hardware
there isn't much of a difference though ;)

take care,
  Gerd



Re: [PATCH 17/17] drm/cirrus: Use VGA macro constants to unblank

2023-02-16 Thread Gerd Hoffmann
On Wed, Feb 15, 2023 at 05:15:17PM +0100, Thomas Zimmermann wrote:
> Set the VGA bit for unblanking with macro constants instead of magic
> values. No functional changes.

blank/unblank should work similarly to bochs (see commit 250e743915d4);
that is maybe a nice thing to add if you modernize the driver anyway.

take care,
  Gerd



Re: [PATCH v10 00/11] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers

2023-01-27 Thread Gerd Hoffmann
On Thu, Jan 26, 2023 at 01:55:09AM +0300, Dmitry Osipenko wrote:
> Hello Thomas and Gerd,
> 
> On 1/9/23 00:04, Dmitry Osipenko wrote:
> > This series:
> > 
> >   1. Makes minor fixes for drm_gem_lru and Panfrost
> >   2. Brings refactoring for older code
> >   3. Adds common drm-shmem memory shrinker
> >   4. Enables shrinker for VirtIO-GPU driver
> >   5. Switches Panfrost driver to the common shrinker
> > 
> > Changelog:
> > 
> > v10:- Rebased on a recent linux-next.
> > 
> > - Added Rob's ack to MSM "Prevent blocking within shrinker loop" patch.
> > 
> > - Added Steven's ack/r-b/t-b for the Panfrost patches.
> > 
> > - Fixed missing export of the new drm_gem_object_evict() function.
> > 
> > - Added fixes tags to the first two patches that are making minor fixes,
> >   for consistency.
> 
> Do you have comments on this version? Otherwise ack will be appreciated.
> Thanks in advance!

Don't feel like signing off on the locking changes; I'm not that
familiar with the drm locking rules, so someone else looking at them
would be good.  Otherwise the series and specifically the virtio changes
look good to me.

Acked-by: Gerd Hoffmann 

take care,
  Gerd



Re: [PATCH v10 06/11] drm/shmem-helper: Don't use vmap_use_count for dma-bufs

2023-01-27 Thread Gerd Hoffmann
On Thu, Jan 26, 2023 at 03:24:30PM +0300, Dmitry Osipenko wrote:
> On 1/26/23 15:17, Gerd Hoffmann wrote:
> > On Mon, Jan 09, 2023 at 12:04:40AM +0300, Dmitry Osipenko wrote:
> >>  its own refcounting of vmaps, use it instead of drm-shmem
> >> counting. This change prepares drm-shmem for addition of memory shrinker
> >> support where drm-shmem will use a single dma-buf reservation lock for
> >> all operations performed over dma-bufs.
> > 
> > Likewise truncated?
> 
> Should be an email problem on your side; please see [1][2] where the
> messages are okay.

Indeed, scratch the comments then.

take care,
  Gerd



Re: [PATCH v10 10/11] drm/virtio: Support memory shrinking

2023-01-27 Thread Gerd Hoffmann
On Mon, Jan 09, 2023 at 12:04:44AM +0300, Dmitry Osipenko wrote:
> Support generic drm-shmem memory shrinker and add new madvise IOCTL to
> the VirtIO-GPU driver. The BO cache manager of the Mesa driver will mark
> BOs as "don't need" using the new IOCTL to let the shrinker purge the
> marked BOs on OOM; the shrinker will also evict unpurgeable shmem BOs
> from memory if the guest supports a SWAP file or partition.
> 
> Signed-off-by: Daniel Almeida 
> Signed-off-by: Dmitry Osipenko 

Acked-by: Gerd Hoffmann 



Re: [PATCH v10 06/11] drm/shmem-helper: Don't use vmap_use_count for dma-bufs

2023-01-26 Thread Gerd Hoffmann
On Mon, Jan 09, 2023 at 12:04:40AM +0300, Dmitry Osipenko wrote:
>  its own refcounting of vmaps, use it instead of drm-shmem
> counting. This change prepares drm-shmem for addition of memory shrinker
> support where drm-shmem will use a single dma-buf reservation lock for
> all operations performed over dma-bufs.

Likewise truncated?

take care,
  Gerd



Re: [PATCH v10 05/11] drm/shmem: Switch to use drm_* debug helpers

2023-01-26 Thread Gerd Hoffmann
On Mon, Jan 09, 2023 at 12:04:39AM +0300, Dmitry Osipenko wrote:
> f a multi-GPU system by using drm_WARN_*() and
> drm_dbg_kms() helpers that print out DRM device name corresponding
> to shmem GEM.

That commit message looks truncated ...

take care,
  Gerd



Re: [PATCH v1 2/2] drm/virtio: Add the hotplug_mode_update property for rescanning of modes

2023-01-11 Thread Gerd Hoffmann
  Hi,

> > > I think we need to do a bit of refactoring/documenting here first.
> > [Kasireddy, Vivek] Just for reference, here is Dave's commit that added this
> > property for qxl:
> > commit 4695b03970df378dcb93fe3e7158381f1e980fa2
> > Author: Dave Airlie 
> > Date:   Fri Oct 11 11:05:00 2013 +1000
> > 
> > qxl: add a connector property to denote hotplug should rescan modes.
> > 
> > So GNOME userspace has an issue with when it rescans for modes on 
> > hotplug
> > events, if the monitor has no EDID it assumes that nothing has changed 
> > on
> > EDID as with real hw we'd never have new modes without a new EDID, and 
> > they
> > kind off rely on the behaviour now, however with virtual GPUs we would
> > like to rescan the modes and get a new preferred mode on hotplug events
> > to handle dynamic guest resizing (where you resize the host window and 
> > the
> > guest resizes with it).
> 
> Ok this is just terrible. Because _anything_ without an EDID is impacted,
> and we're certainly not going to sprinkle this property all over gpu
> drivers just so Gnome takes the right path.

Oh, and (newer) virtio-gpu actually has EDIDs ...

take care,
  Gerd



Re: [PATCH v1 1/2] drm/virtio: Attach and set suggested_x/y properties for the connector

2023-01-10 Thread Gerd Hoffmann
  Hi,

> +static void virtio_gpu_update_output_position(struct virtio_gpu_output 
> *output)
> +{
> + struct drm_connector *connector = >conn;
> + struct drm_device *dev = connector->dev;
> +
> + drm_object_property_set_value(>base,
> + dev->mode_config.suggested_x_property, output->info.r.x);
> + drm_object_property_set_value(>base,
> + dev->mode_config.suggested_y_property, output->info.r.y);
> +}

This fails sparse checking

sparse warnings: (new ones prefixed by >>)
>> drivers/gpu/drm/virtio/virtgpu_vq.c:654:70: sparse: sparse: incorrect type 
>> in argument 3 (different base types) @@ expected unsigned long long 
>> [usertype] val @@ got restricted __le32 [usertype] x @@
   drivers/gpu/drm/virtio/virtgpu_vq.c:654:70: sparse: expected unsigned 
long long [usertype] val
   drivers/gpu/drm/virtio/virtgpu_vq.c:654:70: sparse: got restricted 
__le32 [usertype] x
>> drivers/gpu/drm/virtio/virtgpu_vq.c:656:70: sparse: sparse: incorrect type 
>> in argument 3 (different base types) @@ expected unsigned long long 
>> [usertype] val @@ got restricted __le32 [usertype] y @@
   drivers/gpu/drm/virtio/virtgpu_vq.c:656:70: sparse: expected unsigned 
long long [usertype] val
   drivers/gpu/drm/virtio/virtgpu_vq.c:656:70: sparse: got restricted 
__le32 [usertype] y

take care,
  Gerd



Re: [PATCH v1 2/2] drm/virtio: Add the hotplug_mode_update property for rescanning of modes

2023-01-10 Thread Gerd Hoffmann
On Fri, Jan 06, 2023 at 10:35:15AM +0100, Daniel Vetter wrote:
> On Fri, Jan 06, 2023 at 09:56:40AM +0100, Gerd Hoffmann wrote:
> > On Thu, Nov 17, 2022 at 05:30:54PM -0800, Vivek Kasireddy wrote:
> > > Setting this property will allow the userspace to look for new modes or
> > > position info when a hotplug event occurs.
> > 
> > This works just fine for modes today.
> > 
> > I assume this is needed to have userspace also check for position
> > info updates (added by patch #1)?
> 
> What does this thing even do? Quick grep says qxl and vmwgfx also use
> this, but it's not documented anywhere, and it's also not done with any
> piece of common code. Which all looks really fishy.

It's again a virtualization-specific thing.  On physical hardware you
typically have no idea which of your two monitors stands left and which
stands right.  On virtual hardware the host knows how the two windows
for the two heads are arranged and can pass on that information to the
guest.  suggested_x/y properties added by patch #1 do pass that
information to userspace so the display server can arrange things
correctly without manual intervention.

I have no clue though why this hotplug_mode_update property exists in
the first place and why mutter checks it.  IMHO mutter could just check
for suggested_x/y directly.

take care,
  Gerd



Re: [PATCH v1 2/2] drm/virtio: Add the hotplug_mode_update property for rescanning of modes

2023-01-06 Thread Gerd Hoffmann
On Thu, Nov 17, 2022 at 05:30:54PM -0800, Vivek Kasireddy wrote:
> Setting this property will allow the userspace to look for new modes or
> position info when a hotplug event occurs.

This works just fine for modes today.

I assume this is needed to have userspace also check for position
info updates added by patch #1?

take care,
  Gerd



Re: [PATCH resend v2] drm/fourcc: Add missing big-endian XRGB1555 and RGB565 formats

2022-11-24 Thread Gerd Hoffmann
  Hi,

> > Supporting 16 bpp in the driver wouldn't be that much of a problem, but
> > processing the framebuffer on the host side when emulating a big endian
> > guest on a little endian host is painful.  I think I can't ask pixman to
> > do a conversion from DRM_FORMAT_RGB565 | DRM_FORMAT_BIG_ENDIAN to
> > DRM_FORMAT_XRGB8888 on a little endian machine.
> 
> Indeed. But you can do a quick 16-bit byteswap, and convert from
> DRM_FORMAT_RGB565 to DRM_FORMAT_XRGB8888?

Sure doable, but it's an extra step in a rarely tested code path ...

> BTW, does pixman support converting DRM_FORMAT_RGB565 to
> DRM_FORMAT_XRGB8888 on a big-endian machine?

I don't think so.  When you can get the color bits with shifting and
masking pixman is happy.  For rgb565 (and xrgb1555) that works only in
native byte order.

take care,
  Gerd



Re: [PATCH resend v2] drm/fourcc: Add missing big-endian XRGB1555 and RGB565 formats

2022-11-23 Thread Gerd Hoffmann
  Hi,

> > > +#ifdef __BIG_ENDIAN
> >
> > Why do we need the #ifdef here? Iirc some hw has big endian flags in the
> > scanout registers, so could supprt this unconditionally if there's no
> > #ifdef around the format defines. Some drivers might then also want a
> > DRM_FORMAT_FOO_BE define to simplify tables and stuff, but that's more a
> > bikeshed.
> 
>  "Limit this to big-endian platforms, as there is currently no need
>  to support these formats on little-endian platforms."
> 
> Will anyone make use of this? In theory, all of the 16-bpp formats
> can have a big-endian counterpart.

Highly unlikely.  Dealing with 16-bpp formats in non-native byte order
is a PITA because it isn't enough to simply adjust the masks and shifts
to pick the correct bits and be done with it.
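A minimal illustration of that point (plain userspace C, not driver code): once a big-endian RGB565 pixel has been read natively on a little-endian host, the green channel's six bits straddle both bytes, so a 16-bit byteswap has to happen before the usual mask-and-shift extraction:

```c
#include <assert.h>
#include <stdint.h>

static uint16_t swap16(uint16_t v)
{
    return (uint16_t)((v >> 8) | (v << 8));
}

/* Convert one RGB565 pixel, stored big-endian but read natively on a
 * little-endian host, to XRGB8888.  Without the swap16() no single
 * mask+shift per component could extract green correctly. */
static uint32_t rgb565_be_to_xrgb8888(uint16_t pix_as_read)
{
    uint16_t pix = swap16(pix_as_read);  /* fix byte order first */
    uint32_t r = (pix >> 11) & 0x1f;
    uint32_t g = (pix >> 5)  & 0x3f;
    uint32_t b =  pix        & 0x1f;

    /* expand 5/6-bit channels to 8 bits by bit replication */
    r = (r << 3) | (r >> 2);
    g = (g << 2) | (g >> 4);
    b = (b << 3) | (b >> 2);
    return 0xff000000u | (r << 16) | (g << 8) | b;
}
```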

The qemu stdvga happens to have a register to switch framebuffer
byteorder (so both x64 and ppc are happy), and the bochs drm driver
actually supports that no matter what the cpu byte order is, but it
supports only DRM_FORMAT_XRGB8888 + DRM_FORMAT_BGRX8888.

Supporting 16 bpp in the driver wouldn't be that much of a problem, but
processing the framebuffer on the host side when emulating a big endian
guest on a little endian host is painful.  I think I can't ask pixman to
do a conversion from DRM_FORMAT_RGB565 | DRM_FORMAT_BIG_ENDIAN to
DRM_FORMAT_XRGB8888 on a little endian machine.

take care,
  Gerd



Re: [PATCH] drm/qxl: fix the suspend/resume issue on qxl device

2022-09-07 Thread Gerd Hoffmann
On Wed, Sep 07, 2022 at 05:44:23PM +0800, Zongmin Zhou wrote:
> 
> From: Zongmin Zhou 
> 
> Details:
> Currently, when trying to suspend and resume with qxl device,
> there are some error messages after resuming,
> eventually leading to a black screen that can't be recovered.

[ analysis snipped ]

> Let's fix this by resetting io and removing the qxl_ring_init_hdr call.

Pushed to drm-misc-next

thanks,
  Gerd



Re: [PATCH] drm/virtio: set fb_modifiers_not_supported

2022-09-07 Thread Gerd Hoffmann
On Wed, Aug 31, 2022 at 12:06:01PM -0700, Chia-I Wu wrote:
> Without this, the drm core advertises LINEAR modifier which is
> incorrect.
> 
> Also userspace virgl does not support modifiers.  For example, it causes
> chrome on ozone/drm to fail with "Failed to create scanout buffer".
> 
> Fixes: 2af104290da5 ("drm: introduce fb_modifiers_not_supported flag in 
> mode_config")
> Suggested-by: Shao-Chuan Lee 
> Signed-off-by: Chia-I Wu 

Pushed to drm-misc-next.

thanks,
  Gerd



[PATCH] drm/bochs: fix blanking

2022-09-06 Thread Gerd Hoffmann
VGA_IS1_RC is the color mode register (VGA_IS1_RM the one for monochrome
mode, note C vs. M at the end).  So when using VGA_IS1_RC make sure the
vga device is actually in color mode and set the corresponding bit in the
misc register.

Reproducible when booting VMs in UEFI mode with some edk2 versions (edk2
fix is on the way too).  Doesn't happen in BIOS mode because in that
case the vgabios already flips the bit.

Fixes: 250e743915d4 ("drm/bochs: Add screen blanking support")
Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/tiny/bochs.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/gpu/drm/tiny/bochs.c b/drivers/gpu/drm/tiny/bochs.c
index 08de13774862..a51262289aef 100644
--- a/drivers/gpu/drm/tiny/bochs.c
+++ b/drivers/gpu/drm/tiny/bochs.c
@@ -309,6 +309,8 @@ static void bochs_hw_fini(struct drm_device *dev)
 static void bochs_hw_blank(struct bochs_device *bochs, bool blank)
 {
DRM_DEBUG_DRIVER("hw_blank %d\n", blank);
+   /* enable color bit (so VGA_IS1_RC access works) */
+   bochs_vga_writeb(bochs, VGA_MIS_W, VGA_MIS_COLOR);
/* discard ar_flip_flop */
(void)bochs_vga_readb(bochs, VGA_IS1_RC);
/* blank or unblank; we need only update index and set 0x20 */
-- 
2.37.3



Re: [PATCH] udmabuf: Set ubuf->sg = NULL if the creation of sg table fails

2022-08-25 Thread Gerd Hoffmann
On Wed, Aug 24, 2022 at 11:35:22PM -0700, Vivek Kasireddy wrote:
> When userspace tries to map the dmabuf and if for some reason
> (e.g. OOM) the creation of the sg table fails, ubuf->sg needs to be
> set to NULL. Otherwise, when the userspace subsequently closes the
> dmabuf fd, we'd try to erroneously free the invalid sg table from
> release_udmabuf resulting in the following crash reported by syzbot:
> 
> general protection fault, probably for non-canonical address
> 0xdc00:  [#1] PREEMPT SMP KASAN

[ ... ]

> Reported-by: syzbot+c80e9ef5d8bb45894...@syzkaller.appspotmail.com
> Cc: Gerd Hoffmann 
> Signed-off-by: Vivek Kasireddy 
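The pattern behind the fix can be sketched as a toy userspace model (names are made up, not the actual udmabuf code): when setup fails partway, the cached pointer is reset to NULL so the release path doesn't later free a stale pointer:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Toy model of the fix, not kernel code. */
struct ubuf { void *sg; };

static int map_ubuf(struct ubuf *u, int simulate_failure)
{
    u->sg = malloc(16);
    if (!u->sg || simulate_failure) {
        free(u->sg);
        u->sg = NULL;        /* the crucial line: don't leave it dangling */
        return -1;
    }
    return 0;
}

static void release_ubuf(struct ubuf *u)
{
    free(u->sg);             /* safe: free(NULL) is a no-op */
    u->sg = NULL;
}
```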

Pushed to drm-misc-next.

thanks,
  Gerd



Re: [PATCH] drm/virtio: Fix same-context optimization

2022-08-25 Thread Gerd Hoffmann
On Fri, Aug 12, 2022 at 03:40:00PM -0700, Rob Clark wrote:
> From: Rob Clark 
> 
> When VIRTGPU_EXECBUF_RING_IDX is used, we should be considering the
> timeline that the EB is running on rather than the global driver fence
> context.
> 
> Fixes: 85c83ea915ed ("drm/virtio: implement context init: allocate an array 
> of fence contexts")
> Signed-off-by: Rob Clark 

Pushed to drm-misc-next.

thanks,
  Gerd



Re: [PATCH v7 7/9] drm/virtio: Improve DMA API usage for shmem BOs

2022-07-19 Thread Gerd Hoffmann
On Wed, Jul 06, 2022 at 10:22:52AM +0300, Dmitry Osipenko wrote:
> On 7/6/22 10:13, Gerd Hoffmann wrote:
> >   Hi,
> > 
> >> Gerd, thank you very much! It was indeed unclear to me how to test the
> >> MMIO GPU, but your variant with microvm works! I was looking into trying
> >> aarch64 in the past, but it also was unclear how to do it since there is
> >> no DT support for the VirtIO-GPU, AFAICS.
> > 
> > aarch64 uses acpi by default (can be disabled via 'qemu -no-acpi').
> > Not fully sure about arm(v7).
> > 
> > Even with DT it should work because DT only describes the virtio-mmio
> > 'slots', not the actual virtio devices.
> > 
> >> There is no virgl support because it's a virtio-gpu-device and not
> >> virtio-gpu-device-gl that is PCI-only in Qemu. Hence everything seems good.
> > 
> > It's named 'virtio-gpu-gl-device'
> 
> Ah, thanks again! Just quickly tested virtio-gpu-gl-device and
> everything works too for MMIO GPU on microvm, including virgl and Xorg
> (glamor).
> 
> [drm] features: +virgl +edid -resource_blob -host_visible
> [drm] features: -context_init
> [drm] number of scanouts: 1
> [drm] number of cap sets: 2
> [drm] cap set 0: id 1, max-version 1, max-size 308
> [drm] cap set 1: id 2, max-version 2, max-size 696
> [drm] Initialized virtio_gpu 0.1.0 0 for LNRO0005:01 on minor 0
> virtio-mmio LNRO0005:01: [drm] drm_plane_enable_fb_damage_clips() not called

Cool.  Haven't found the time to try s390x, so I'm taking that as a good
enough test that we don't have any unnoticed dependencies on pci.

Queued up.  I'll go over a few more pending patches, and assuming no
issues show up in testing this should land in drm-misc-next in a few
hours.

take care,
  Gerd



Re: [PATCH 1/3] drm/fourcc: Add missing big-endian XRGB1555 and RGB565 formats

2022-07-12 Thread Gerd Hoffmann
> > As described above DRM_FORMAT_HOST_RGB565 means bigendian on bigendian
> > hosts and little endian on little endian hosts.  Which is not correct
> > when your hardware does big endian no matter what.
> 
> But (a) drm_driver_legacy_fb_format() uses DRM_FORMAT_HOST_RGB565
> if quirk_addfb_prefer_host_byte_order is set,

Ah, right.  Missed that in 'git grep' output.  Given that traditional
fbdev behavior is to expect native byte order, using
DRM_FORMAT_HOST_RGB565 there indeed makes sense.

Scratch my comment about it being unused then ;)

thanks,
  Gerd



Re: [PATCH 1/3] drm/fourcc: Add missing big-endian XRGB1555 and RGB565 formats

2022-07-12 Thread Gerd Hoffmann
On Tue, Jul 12, 2022 at 10:01:15AM +0200, Geert Uytterhoeven wrote:
> Hi Gerd,
> 
> > It IMHO is not applicable to any physical hardware.  It's used by
> > virtio-gpu where the supported format depends on the byte order
> > (it is argb in native byte order).  Only virtual hardware can
> > have that kind of behavior.
> >
> > And we can probably drop the DRM_FORMAT_HOST_* variants for 1555 and
> > 565, they are not used anywhere.
> 
> Atari DRM supports (big-endian) RGB565, so it uses
> DRM_FORMAT_HOST_RGB565.

Hardware that is fixed big endian should use 'DRM_FORMAT_RGB565 | DRM_FORMAT_BIG_ENDIAN'.

As described above DRM_FORMAT_HOST_RGB565 means bigendian on bigendian
hosts and little endian on little endian hosts.  Which is not correct
when your hardware does big endian no matter what.

take care,
  Gerd



Re: [PATCH 1/3] drm/fourcc: Add missing big-endian XRGB1555 and RGB565 formats

2022-07-12 Thread Gerd Hoffmann
  Hi,

> So adding support for bigendian formats to the driver shouldn't be
> much of a problem.  The vram will continue to run in little endian
> RGB565, the shadow will be big endian RGB565, and the driver must
> byteswap when copying.

For completeness: The other obvious option (for fbcon) would be to
handle the byteswapping in the generic drm fbdev emulation, which
would have the advantage that it would be more generic and would
not depend on the drm driver supporting the bigendian rgb565
formats ...

take care,
  Gerd



Re: [PATCH v2 6/8] drm/virtio: Use the hotspot properties from cursor planes

2022-07-12 Thread Gerd Hoffmann
On Mon, Jul 11, 2022 at 11:32:44PM -0400, Zack Rusin wrote:
> From: Zack Rusin 
> 
> Atomic modesetting got support for mouse hotspots via the hotspot
> properties. Port the legacy kms hotspot handling to the new properties
> on cursor planes.
> 
> Signed-off-by: Zack Rusin 
> Cc: David Airlie 
> Cc: Gerd Hoffmann 
> Cc: Gurchetan Singh 
> Cc: Chia-I Wu 
> Cc: Daniel Vetter 
> Cc: virtualizat...@lists.linux-foundation.org

Reviewed-by: Gerd Hoffmann 



Re: [PATCH v2 4/8] drm/qxl: Use the hotspot properties from cursor planes

2022-07-12 Thread Gerd Hoffmann
On Mon, Jul 11, 2022 at 11:32:42PM -0400, Zack Rusin wrote:
> From: Zack Rusin 
> 
> Atomic modesetting got support for mouse hotspots via the hotspot
> properties. Port the legacy kms hotspot handling to the new properties
> on cursor planes.
> 
> Signed-off-by: Zack Rusin 
> Cc: Dave Airlie 
> Cc: Gerd Hoffmann 
> Cc: Daniel Vetter 
> Cc: virtualizat...@lists.linux-foundation.org
> Cc: spice-de...@lists.freedesktop.org

Reviewed-by: Gerd Hoffmann 



Re: [PATCH 1/3] drm/fourcc: Add missing big-endian XRGB1555 and RGB565 formats

2022-07-12 Thread Gerd Hoffmann
On Mon, Jul 11, 2022 at 05:30:30PM +0200, Geert Uytterhoeven wrote:
> Hi Michel,
> 
> > > Cirrus is the only driver setting quirk_addfb_prefer_host_byte_order
> > > and supporting RGB565 or XRGB1555, but no one tried that on big-endian?
> > > Cirrus does not support DRM_FORMAT_RGB565 | DRM_FORMAT_BIG_ENDIAN
> > > in cirrus_fb_create, so you cannot get a graphical text console.
> > >
> > > Do we need these definitions on little-endian platforms, too?
> > > Would it be better to use "DRM_FORMAT_{XRGB1555,RGB565} |
> > > DRM_FORMAT_BIG_ENDIAN" instead of "DRM_FORMAT_HOST_{XRGB1555,RGB565}" in
> > > formats[]?
> >
> > The intention of DRM_FORMAT_HOST_* is that they are macros in
> > include/drm/drm_fourcc.h which just map to little endian formats
> > defined in drivers/gpu/drm/drm_fourcc.c. Since this is not possible
> > for big endian hosts for XRGB1555 or RGB565 (or any other format
> > with non-8-bit components), this isn't applicable here.

It IMHO is not applicable to any physical hardware.  It's used by
virtio-gpu where the supported format depends on the byte order
(it is argb in native byte order).  Only virtual hardware can
have that kind of behavior.

And we can probably drop the DRM_FORMAT_HOST_* variants for 1555 and
565, they are not used anywhere.

> I read that as that you prefer to write "DRM_FORMAT_{XRGB1555,RGB565}
> | DRM_FORMAT_BIG_ENDIAN" in formats[]?

Agree.

> > It's also doubtful that Cirrus hardware would access these formats
> > as big endian (drivers/gpu/drm/tiny/cirrus.c has no endianness
> > references at all, and the hardware was surely designed for x86
> > first and foremost).

Yes.  qemu mimics physical cirrus hardware which uses little endian.

> > Instead, fbcon (and user space) needs to convert to little endian
> > when using DRM_FORMAT_HOST_{XRGB1555,RGB565} with the cirrus driver
> > on big endian hosts.

Well, the cirrus driver uses shadow framebuffers anyway (the only
workable approach given it has only 4M of vram), and it also supports
converting formats on-the-fly when copying from shadow to vram.

So adding support for bigendian formats to the driver shouldn't be
much of a problem.  The vram will continue to run in little endian
RGB565, the shadow will be big endian RGB565, and the driver must
byteswap when copying.
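Such a shadow-to-vram copy could look roughly like the sketch below (plain C stand-in; the actual driver would use swab16() and the I/O memcpy helpers):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Rough sketch, not the real cirrus driver code: copy one scanline
 * from a big-endian RGB565 shadow buffer into little-endian RGB565
 * vram, swapping each 16-bit pixel on the way. */
static void copy_line_swab16(uint16_t *vram, const uint16_t *shadow,
                             size_t pixels)
{
    for (size_t i = 0; i < pixels; i++)
        vram[i] = (uint16_t)((shadow[i] >> 8) | (shadow[i] << 8));
}
```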

> Yeah, probably the cirrus driver can use some fixes...

I'd call it improvements.  It's not like the cirrus driver is broken.

take care,
  Gerd



Re: [PATCH v7 7/9] drm/virtio: Improve DMA API usage for shmem BOs

2022-07-06 Thread Gerd Hoffmann
  Hi,

> Gerd, thank you very much! It was indeed unclear to me how to test the
> MMIO GPU, but your variant with microvm works! I was looking into trying
> aarch64 in the past, but it also was unclear how to do it since there is
> no DT support for the VirtIO-GPU, AFAICS.

aarch64 uses acpi by default (can be disabled via 'qemu -no-acpi').
Not fully sure about arm(v7).

Even with DT it should work because DT only describes the virtio-mmio
'slots', not the actual virtio devices.

> There is no virgl support because it's a virtio-gpu-device and not
> virtio-gpu-device-gl that is PCI-only in Qemu. Hence everything seems good.

It's named 'virtio-gpu-gl-device'

take care,
  Gerd



Re: [PATCH v7 7/9] drm/virtio: Improve DMA API usage for shmem BOs

2022-07-05 Thread Gerd Hoffmann
  Hi,

> > Also note that pci is not the only virtio transport we have.
> 
> The VirtIO indeed has other transports, but only PCI is really supported
> in case of the VirtIO-GPU in kernel and in Qemu/crosvm, AFAICT. Hence
> only the PCI transport was tested.

qemu -M microvm \
  -global virtio-mmio.force-legacy=false \
  -device virtio-gpu-device

Gives you a functional virtio-gpu device on virtio-mmio.

aarch64 virt machines support both pci and mmio too.
s390x has virtio-gpu-ccw ...

take care,
  Gerd



Re: [PATCH v7 7/9] drm/virtio: Improve DMA API usage for shmem BOs

2022-07-05 Thread Gerd Hoffmann
  Hi,

> -  * So for the moment keep things as-is, with a bulky comment
> -  * for the next person who feels like removing this
> -  * drm_dev_set_unique() quirk.

Dragons lurking here.  It's not the first attempt to ditch this, and so
far all have been rolled back due to regressions.  Specifically Xorg is
notoriously picky if it doesn't find its expectations fulfilled.

Also note that pci is not the only virtio transport we have.

What kind of testing has this patch seen?

take care,
  Gerd



Re: [PATCH 0/6] drm: Add mouse cursor hotspot support to atomic KMS

2022-06-10 Thread Gerd Hoffmann
  Hi,

> > As Pekka mentioned, I'd also like to have a conversation about how far we 
> > want to
> > push virtualized driver features. I think KMS support is a good feature to 
> > have
> > to spin up a VM and have all of the basics working. However I don't think 
> > it's
> > a good idea to try to plumb an ever-growing list of fancy features
> > (seamless integration of guest windows into the host, HiDPI, multi-monitor,
> > etc) into KMS. You'd just end up re-inventing Wayland or RDP on top of KMS.
> > Instead of re-inventing these, just use RDP or waypipe or X11 forwarding
> > directly.

> > So I think we need to draw a line somewhere, and decide e.g. that 
> > virtualized
> > cursors are fine to add in KMS, but HiDPI is not.

What is the problem with HiDPI?  qemu generates standard edid blobs,
there should be no need to special-case virtualized drivers in any way.

What is the problem with multi-monitor?  That isn't much different than
physical multi-monitor either.

One little thing though:  On physical hardware you just don't know which
monitor is left and which is right until the user tells you.  In case of
a virtual multi-monitor setup we know how the two windows for the two
virtual monitors are arranged on the host and can pass that as hint to
the guest (not sure whether *that* is the purpose of the
suggested_{x,y} properties).

> It's getting a bit far off-topic, but google cros team has an out-of-tree
> (at least I think it's not merged yet) wayland-virtio driver for exactly
> this use-case. Trying to move towards something like that for fancy
> virtualized setups sounds like the better approach indeed, with kms just
> as the bare-bones fallback option.

virtio-gpu got the ability to attach uuids to objects, to allow them
to be identified on the host side.  So it could be that wayland-virtio
still uses kms for framebuffers (disclaimer: don't know how
wayland-virtio works in detail).  But, yes, all the scanout + cursor
handling would be out of the way, virtio-gpu would "only" handle fast
buffer sharing.

take care,
  Gerd



Re: [PATCH 0/6] drm: Add mouse cursor hotspot support to atomic KMS

2022-06-10 Thread Gerd Hoffmann
  Hi,

> >   4. make sure the hotspot property is only set on VIRTUAL_CURSOR planes
> >   and nothing else in the rebased patch series
> 
> Simon also mentioned on irc that these special planes must not expose the
> CRTC_X/Y property, since that doesn't really do much at all. Or is our
> understanding of how this all works for commandeered cursors wrong?

Depends.  In some cases the pointer position is a one-way host->guest
ticket (via tablet device).  In some cases the other direction works too
and the guest can warp the mouse pointer to another place on the host
display.  The guest can't easily figure out whether warping works or not
because that depends on host-side configuration the guest has no insight
into.

take care,
  Gerd



Re: [PATCH v2 2/2] drm/virtio: fence created per cursor/plane update

2022-06-08 Thread Gerd Hoffmann
On Fri, Jun 03, 2022 at 02:18:49PM -0700, Dongwon Kim wrote:
> Having one fence for a vgfb would cause conflict in case there are
> multiple planes referencing the same vgfb (e.g. Xorg screen covering
> two displays in extended mode) being flushed simultaneously. So it makes
> sense to use a separate fence for each plane update to prevent this.
> 
> vgfb->fence is not required anymore with the suggested code change so
> both prepare_fb and cleanup_fb are removed since only fence creation/
> freeing are done in there.

The fences are allocated and released in prepare_fb + cleanup_fb for a
reason: atomic_update must not fail.

I guess virtio-gpu must be fixed to use drm_plane_state->fence
correctly ...
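The rule being invoked can be modeled in a few lines (a toy userspace sketch with made-up names, not the real DRM plane helpers): fallible work such as fence allocation belongs in prepare_fb, which may return an error, while atomic_update has no failure path and only consumes what was prepared:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

struct fence { int id; };
struct plane_state { struct fence *fence; };

/* Fallible setup: may report an error to the caller. */
static int prepare_fb(struct plane_state *st)
{
    st->fence = malloc(sizeof(*st->fence));
    if (!st->fence)
        return -1;
    st->fence->id = 42;
    return 0;
}

/* No allocations, no error return: just consume the fence. */
static void atomic_update(struct plane_state *st)
{
    (void)st->fence;
}

static void cleanup_fb(struct plane_state *st)
{
    free(st->fence);
    st->fence = NULL;   /* avoid a dangling pointer on the next cycle */
}
```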

take care,
  Gerd



Re: [PATCH 2/2] vfio/pci: Remove console drivers

2022-06-08 Thread Gerd Hoffmann
  Hi,

> You shouldn't have to copy any of the implementation of the aperture
> helpers.

That comes from the aperture helpers being part of drm ...

> For patch 2, the most trivial workaround is to instanciate struct drm_driver
> here and set the name field to 'vdev->vdev.ops->name'. In the longer term,
> the aperture helpers will be moved out of DRM and into a more prominent
> location. That workaround will be cleaned up then.

... but if the long-term plan is to clean that up properly anyway I
don't see the point in bike shedding too much on the details of some
temporary solution.

> Alternatively, drm_aperture_remove_conflicting_pci_framebuffers() could be
> changed to accept the name string as second argument, but that's quite a bit
> of churn within the DRM code.

Also pointless churn because you'll have the very same churn again when
moving the aperture helpers out of drm.

take care,
  Gerd



Re: [PATCH 0/6] drm: Add mouse cursor hotspot support to atomic KMS

2022-06-08 Thread Gerd Hoffmann
  Hi,

> > Typically there is a communication path from guest to host for pointer
> > movements (i.e. crtc_x + crtc_y updates), so the host knows where the
> > guest wants the cursor plane being placed.  So in case the pointer is
> > moved by other means (different input device, some application warping
> > the pointer, ...) the host can adapt.
> 
> Would it not be better to be explicit about it? To avoid fragile
> heuristics.

Explicit about what exactly?  Whether pointer warps via crtc_x + crtc_y
updates work or not?  Not so easy ...

> > Nevertheless behavior is not consistent here because in some cases the
> > feedback loop is not wired up end-to-end.  The spice protocol has a
> > message type for that, so pointer warps work.  The vnc protocol has not,
> > so they don't.

... for example qemu allows enabling both the spice and vnc protocols.

> > It actually is a problem for multihead configurations though.  Having
> > some way to map input devices to scanouts would actually be helpful.
> > Years ago I checked how this works for touchscreens to see whether it
> > is possible to leverage that for VMs somehow.  There wasn't an obvious
> > way, and I forgot the details in the meantime ...
> 
> Ah, that's the other way around, right? To tell guest OS which output
> an absolute input device is relative to?

Yes.

> Having a standard for naming outputs is hard it seems, and there is
> also the connector vs. monitor dilemma. I guess absolute input devices
> would usually want to be associated with the (real or virtual) monitor
> regardless of which (real or virtual) connector it is connected to.

Yes, this should be linked to the monitor not the connector.  qemu
learned to generate edid blobs a while back, so we actually have
virtual monitors and can create virtual monitor properties for them.

take care,
  Gerd



Re: [PATCH 0/2] Improve vfio-pci primary GPU assignment behavior

2022-06-08 Thread Gerd Hoffmann
  Hi,

> >> But also, this issue isn't something that only affects graphic devices,
> >> right? AFAIU from [1] and [2], the same issue happens if a PCI device
> >> has to be bound to vfio-pci but already was bound to a host driver.
> > 
> > Nope.  There is a standard procedure to bind and unbind pci drivers via
> > sysfs, using /sys/bus/pci/drivers/$name/{bind,unbind}.
> >
> 
> Yes, but the cover letter says:
> 
> "Users often employ kernel command line arguments to disable conflicting
> drivers or perform unbinding in userspace to avoid this"

That's helpful at times to deal with driver and/or hardware quirks.
Example: Years ago drm drivers used to be horrible when it came to
unbind, leaving oopses and panics left & right when you tried (luckily
it works much better these days).

[ leaving this here for completeness, snipping the remaining reply,
  noting that we are on the same page now ]

thanks & take care,
  Gerd



Re: [PATCH 0/2] Improve vfio-pci primary GPU assignment behavior

2022-06-08 Thread Gerd Hoffmann
  Hi,

> But also, this issue isn't something that only affects graphic devices,
> right? AFAIU from [1] and [2], the same issue happens if a PCI device
> has to be bound to vfio-pci but already was bound to a host driver.

Nope.  There is a standard procedure to bind and unbind pci drivers via
sysfs, using /sys/bus/pci/drivers/$name/{bind,unbind}.

> The fact that DRM happens to have some infrastructure to remove devices
> that conflict with an aperture is just a coincidence.

No.  It's a consequence of firmware framebuffers not being linked to the
pci device actually backing them, so some other way is needed to find
and solve conflicts.

> The series [0] mentioned above, adds a sysfb_disable() that disables the
> Generic System Framebuffer logic that is what registers the framebuffer
> devices that are bound to these generic video drivers. On disable, the
> devices registered by sysfb are also unregistered.

As Alex already mentioned this might not have the desired effect on
systems with multiple GPUs (I think even without considering vfio-pci).

> That is, do you want to remove the {vesa,efi,simple}fb and simpledrm
> drivers or is there a need to also remove real fbdev and DRM drivers?

Boot framebuffers are the problem because they are neither visible nor
manageable in /sys/bus/pci.  For real fbdev/drm drivers the standard pci
unbind can be used.

take care,
  Gerd



Re: [PATCH 0/6] drm: Add mouse cursor hotspot support to atomic KMS

2022-06-07 Thread Gerd Hoffmann
> Why are pointer cursors misplaced on paravirtualized drivers?
> 
> It is because the paravirtualized drivers or VM viewers do *not* place
> the cursor plane at the CRTC_X, CRTC_Y position in the guest CRTC area.
> This is obvious: if CRTC_X, CRTC_Y were honoured, there would be no
> misplacement.
> 
> Instead, the VM stack plays clever tricks with cursor planes. I have
> understood only one of those tricks, and it goes something like this.
> To improve hand-eye coordination, that is to reduce the hand-to-eye
> response time a.k.a latency, the VM guest KMS driver relays the cursor
> plane separately to the VM viewer application.

Yes, the cursor is sent separately.

> The VM viewer application presents the cursor plane content by pushing
> them to the host window system as the pointer cursor.

Yes (i.e. gdk_window_set_cursor() on the host).

> This means the host window system will be autonomously moving the
> cursor plane image around, completely disregarding what the guest KMS
> client programmed into CRTC_X, CRTC_Y.

Yes.

That is combined with a virtual input device sending absolute
coordinates (i.e. tablet), so mouse clicks land at the correct place.
And that is the point where having the hotspot information is essential
on the host side.

> Given this UAPI contract, it is very easy for userspace to make the
> conclusion that a cursor plane is just another plane it can use for
> whatever it wants. Weston and wlroots indeed leverage this, putting
> also normal windows and other stuff to the cursor plane when they
> happen to fit.

virtual machine display devices typically offer small (64x64) cursor
planes, so unlike the 512x512 planes I've seen offered by i915 they are
hardly usable for anything but cursors.  Likewise, additional overlay
planes are typically not offered, so the classic primary+cursor setup is
pretty much the only reasonable option userspace has.

> I believe the solution has two parts:
> 
> - The guest KMS driver needs to know whether the guest userspace is
>   prepared for the cursor plane being commandeered. If userspace does
>   not indicate it is prepared for it, commandeering must not happen.

Yes.  That isn't much of a problem in practice though due to the limited
driver/device offerings outlined above.

> - Cursor hotspot needs new KMS properties, and a KMS client must set
>   them if the KMS client indicates it is prepared for cursor plane
>   commandeering.

Yes, and that is what hurts in practice and thus caused the blacklists
to be created.

> There are further problems with cursor plane commandeering. The 2020
> email thread Simon linked to discusses the problem of pointer devices:
> if VM guest userspace takes pointer input from multiple sources, how
> will the VM stack know which virtual input device, if any, should drive
> the cursor plane position?

Typically there is a communication path from guest to host for pointer
movements (i.e. crtc_x + crtc_y updates), so the host knows where the
guest wants the cursor plane being placed.  So in case the pointer is
moved by other means (different input device, some application warping
the pointer, ...) the host can adapt.

Nevertheless behavior is not consistent here because in some cases the
feedback loop is not wired up end-to-end.  The spice protocol has a
message type for that, so pointer warps work.  The vnc protocol has not,
so they don't.

> To me the answer to this question seems it could be intimately tied to
> the first problem: commandeering the cursor plane is allowed only if
> guest userspace tells the guest KMS driver which input device the
> cursor plane shall be related to. If no input device is indicated,
> then commandeering must not happen.

Why require an input device?  I just don't see how that would help.

For allowing the host to freely move around the cursor plane
("commandeering") I do see the point in enforcing that from a design
point of view, although I doubt it'll buy us much in practice given we
have broken drivers in the wild so userspace will continue to work with
blacklists.

Having some capability to negotiate "commandeering" between kernel and
userspace certainly makes sense, so we can get of the black lists
long-term (although it'll probably takes a few years ...).

> I can understand if people do not want to tackle this question,
> because it probably has not been a problem yet.

On a standard guest this isn't a problem indeed because there is only
one input device and only one crtc.

It actually is a problem for multihead configurations though.  Having
some way to map input devices to scanouts would actually be helpful.
Years ago I checked how this works for touchscreens to see whenever it
is possible to leverage that for VMs somehow.  There wasn't some obvious
way, and I forgot the details meanwhile ...

take care,
  Gerd



Re: [PATCH 0/6] drm: Add mouse cursor hotspot support to atomic KMS

2022-06-07 Thread Gerd Hoffmann
  Hi,

> I don’t think I see how that fixes anything. In particular I don’t see
> a way of fixing the old user space at all. We require hotspot info,
> old user space doesn’t set it because there’s no way of setting it on
> atomic kms. Whether we expose cursor plane or not doesn’t change the
> fact that we still require the hotspot info.

Not exposing a cursor plane at all forces swcursor, which sidesteps the
hotspot issue at the expense of rather sluggish pointer updates
because those suddenly require a round-trip to the guest.

take care,
  Gerd



Re: [PATCH 0/6] drm: Add mouse cursor hotspot support to atomic KMS

2022-06-03 Thread Gerd Hoffmann
  Hi,

> the legacy kms to atomic. This left virtualized drivers, all of which
> are atomic, in a weird spot because all userspace compositors put
> those drivers on deny-lists for atomic kms due to the fact that mouse
> clicks were incorrectly routed, e.g:

> - all userspace code needs to hardcode a list of drivers which require
> hotspots because there's no way to query from drm "does this driver
> require hotspot"

So drivers move from one list to another.  Not ideal, but also not worse
than it was before, so:

Acked-by: Gerd Hoffmann 

for the qxl and virtio driver changes.

take care,
  Gerd



Re: 回复: Re: 回复: Re: 回复: Re: 回复: Re: [PATCH v1 1/2] drm/qxl: replace ioremap by ioremap_cache on arm64

2022-03-24 Thread Gerd Hoffmann
On Thu, Mar 24, 2022 at 06:34:02PM +0800, liuco...@kylinos.cn wrote:
>ok, thanks, a lot of our customers used qxl on x86 before, so we still
>need to support qxl on arm64.

Well, qxl isn't the best choice even on x86.  The main advantage it
offers (2d acceleration) is basically useless today because pretty much
everything moved on to use 3d acceleration instead.  So qxl ends up
being used as dumb framebuffer with software 3d rendering.

So, I'm still recommending to just use virtio-gpu ...

take care,
  Gerd



Re: 回复: Re: 回复: Re: 回复: Re: [PATCH v1 1/2] drm/qxl: replace ioremap by ioremap_cache on arm64

2022-03-24 Thread Gerd Hoffmann
On Thu, Mar 24, 2022 at 10:20:40AM +0100, Christian König wrote:
> Hi Cong,
> 
> when I understand Robin correctly all mapping (host, guest, kernel,
> userspace etc..) must have the same caching attributes unless you use the
> S2FWB feature introduced with Armv8.4.
> 
> If you don't follow those rules you usually run into coherency issues or
> even worse system hangs. So you not only need to adjust the kernel mappings,
> but also the function for userspace mappings to follow those rules.

That matches my understanding.

For qxl specifically: when using the xorg qxl driver, getting the
userspace mappings right is essential because userspace will write qxl
command buffers then.  When using the xorg modesetting driver or wayland
the worst thing happening would be display corruption because userspace
will only map dumb bo's for pixel data.

I'm wondering though why you are keen on getting qxl to work instead of
just using virtio-gpu?

take care,
  Gerd



Re: [PATCH v1 1/2] drm/qxl: replace ioremap by ioremap_cache on arm64

2022-03-23 Thread Gerd Hoffmann
On Wed, Mar 23, 2022 at 09:45:13AM +, Robin Murphy wrote:
> On 2022-03-23 07:15, Christian König wrote:
> > Am 22.03.22 um 10:34 schrieb Cong Liu:
> > > qxl use ioremap to map ram_header and rom, in the arm64 implementation,
> > > the device is mapped as DEVICE_nGnRE, it can not support unaligned
> > > access.
> > 
> > Well that some ARM boards doesn't allow unaligned access to MMIO space
> > is a well known bug of those ARM boards.
> > 
> > So as far as I know this is a hardware bug you are trying to workaround
> > here and I'm not 100% sure that this is correct.
> 
> No, this one's not a bug. The Device memory type used for iomem mappings is
> *architecturally* defined to enforce properties like aligned accesses, no
> speculation, no reordering, etc. If something wants to be treated more like
> RAM than actual MMIO registers, then ioremap_wc() or ioremap_cache() is the
> appropriate thing to do in general (with the former being a bit more
> portable according to Documentation/driver-api/device-io.rst).

Well, qxl is a virtual device, so it *is* ram.

I'm wondering whether qxl actually works on arm?  As far as I know, all
virtual display devices with (virtual) pci memory bars for vram do not
work on arm, because the guest maps vram as io memory while the host
maps vram as normal ram, and the mapping attribute mismatch causes
caching troubles (only noticeable on real arm hardware, not in
emulation).  Did something change here recently?

take care,
  Gerd



Re: [PATCH 2/2] fbdev: Improve performance of sys_imageblit()

2022-02-17 Thread Gerd Hoffmann
> - for (j = k; j--; ) {
> - shift -= ppw;
> - end_mask = tab[(*src >> shift) & bit_mask];
> - *dst++ = (end_mask & eorx) ^ bgx;
> - if (!shift) {
> - shift = 8;
> - src++;
> + for (j = k; j; j -= jdecr, ++src) {
> + switch (ppw) {
> + case 4: /* 8 bpp */
> + *dst++ = colortab[(*src >> 4) & bit_mask];
> + *dst++ = colortab[(*src >> 0) & bit_mask];
> + break;
> + case 2: /* 16 bpp */
> + *dst++ = colortab[(*src >> 6) & bit_mask];
> + *dst++ = colortab[(*src >> 4) & bit_mask];
> + *dst++ = colortab[(*src >> 2) & bit_mask];
> + *dst++ = colortab[(*src >> 0) & bit_mask];
> + break;
> + case 1: /* 32 bpp */
> + *dst++ = colortab[(*src >> 7) & bit_mask];
> + *dst++ = colortab[(*src >> 6) & bit_mask];
> + *dst++ = colortab[(*src >> 5) & bit_mask];
> + *dst++ = colortab[(*src >> 4) & bit_mask];
> + *dst++ = colortab[(*src >> 3) & bit_mask];
> + *dst++ = colortab[(*src >> 2) & bit_mask];
> + *dst++ = colortab[(*src >> 1) & bit_mask];
> + *dst++ = colortab[(*src >> 0) & bit_mask];
> + break;
>   }

How about moving the switch out of the loop, i.e.

switch (ppw) {
case 4:
for (j = ...) {
*dst++ = colortab[(*src >> 4) & bit_mask];
*dst++ = colortab[(*src >> 0) & bit_mask];
}
[ ... ]
}

?
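Spelled out as a self-contained sketch (simplified: `colortab`, `bit_mask` and the byte-wise loop bounds stand in for the real sys_imageblit() state, which also handles alignment and other ppw values):

```c
#include <stdint.h>

/* Hoisted-switch variant of the inner loop: one branch on ppw per line
 * instead of one per source byte, each case a tight lookup loop. */
static void blit_line(uint32_t *dst, const uint8_t *src, int k,
		      const uint32_t *colortab, uint32_t bit_mask, int ppw)
{
	int j, s;

	switch (ppw) {
	case 4: /* 8 bpp: each nibble selects a word of four pixels */
		for (j = 0; j < k; j++, src++) {
			*dst++ = colortab[(*src >> 4) & bit_mask];
			*dst++ = colortab[(*src >> 0) & bit_mask];
		}
		break;
	case 2: /* 16 bpp: four lookups per source byte */
		for (j = 0; j < k; j++, src++) {
			*dst++ = colortab[(*src >> 6) & bit_mask];
			*dst++ = colortab[(*src >> 4) & bit_mask];
			*dst++ = colortab[(*src >> 2) & bit_mask];
			*dst++ = colortab[(*src >> 0) & bit_mask];
		}
		break;
	case 1: /* 32 bpp: one lookup per source bit */
		for (j = 0; j < k; j++, src++)
			for (s = 7; s >= 0; s--)
				*dst++ = colortab[(*src >> s) & bit_mask];
		break;
	}
}
```

The compiler can then keep each case's loop free of the per-byte switch dispatch.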

take care,
  Gerd



Re: [PATCH] MAINTAINERS: Add Helge as fbdev maintainer

2022-01-21 Thread Gerd Hoffmann
  Hi,

> What I still don't understand: why are you so keen on maintaining an
> interface that only serves the console? Nothing else uses fbdev these days.
> Why not improve DRM/userspace to the point where it fits your requirements?
> Long-term, the latter would make a lot more sense.

And note that it is *much* easier to write drm drivers these days.
We got a lot of helpers, we got generic fbdev emulation and more.

If you are curious just compare the initial commit of the bochs drm
driver with the current code.  Initially the driver had to manage ttm
and fbdev and whatnot else.  These days writing a (non-accelerated) drm
driver is basically some boilerplate picking the helpers which work best
for your hardware, the code to actually program the hardware and that's
it.

The "new drivers should be drm" policy exists for years already btw,
exactly because of the unfixable fbdev API limitations.  The bochs drm
was a fbdev driver initially.  Never merged.  Got rewritten as drm
driver and that was merged instead.  In 2013, almost a decade ago.

And, yes, it very well might be that drm misses some piece here and
there for specific hardware, such as fbdev emulation not supporting
rgb332.  But I fully agree with Thomas here:  Improving drm is probably
a much better way to spend your time.  drm is where the development
happens.  fbdev is only kept alive.

take care,
  Gerd



Re: [PATCH 2/2] Revert "fbcon: Disable accelerated scrolling"

2022-01-20 Thread Gerd Hoffmann
  Hi,

> > So if this really has to come back then I think the pragmatic approach is
> > to do it behind a CONFIG_FBCON_ACCEL, default n, and with a huge warning
> > that enabling that shouldn't be done for any distro which only enables
> > firmware and drm fbdev drivers.
> 
> Thanks for coming back on this, but quite frankly I don't understand
> that request. How should that warning look like, something along:
> "BE WARNED: The framebuffer text console on your non-DRM supported
> graphic card will then run faster and smoother if you enable this option."
> That doesn't make sense. People and distros would want to enable that.

Nope.  Most distros want to disable fbdev drivers rather sooner than later.
The fbdev drivers enabled in the fedora kernel today:

CONFIG_FB_VGA16=m
CONFIG_FB_VESA=y
CONFIG_FB_EFI=y
CONFIG_FB_SSD1307=m

CONFIG_FB_VESA + CONFIG_FB_EFI will go away soon, with simpledrm taking
over their role.

> And if a distro *just* has firmware and drm fbdev drivers enabled,
> none of the non-DRM graphic cards would be loaded anyway and this code
> wouldn't be executed anyway.

Yes, exactly.  That's why there is no point in compiling that code.

take care,
  Gerd



Re: [PATCH] MAINTAINERS: Add Helge as fbdev maintainer

2022-01-20 Thread Gerd Hoffmann
  Hi,

> > fbcon could do the same, i.e. render to fbdev in a 60Hz timer instead of
> > doing it synchronously.
> 
> Hopefully only the parts of the screen which need a redraw?

Sure.  drm fbdev emulation with shadow framebuffer tracks changes and
only flushes dirty areas to the real framebuffer.

fbcon could do the same when implementing lazy rendering, keeping track
of the chars/attrs which did actually change and only render those to
the (emulated or real) fbdev.
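A minimal model of that lazy rendering step could look like this (illustrative only — the cell layout and names are assumptions, not fbcon code):

```c
#include <stdint.h>

#define ROWS 25
#define COLS 80

struct cell {
	uint8_t ch;	/* character */
	uint8_t attr;	/* attribute (color etc.) */
};

static struct cell screen[ROWS][COLS];	/* what the console wants shown */
static struct cell shadow[ROWS][COLS];	/* what was last rendered */

/* Timer-driven flush: redraw only cells whose char/attr pair changed.
 * Returns the number of cells actually re-rendered. */
static int flush_dirty(void)
{
	int row, col, drawn = 0;

	for (row = 0; row < ROWS; row++)
		for (col = 0; col < COLS; col++)
			if (screen[row][col].ch != shadow[row][col].ch ||
			    screen[row][col].attr != shadow[row][col].attr) {
				/* real code: render the glyph to the fbdev here */
				shadow[row][col] = screen[row][col];
				drawn++;
			}
	return drawn;
}
```

A flood of text then costs cheap cell updates, and only the final state of each cell gets rendered on the next flush.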

take care,
  Gerd



Re: [PATCH] MAINTAINERS: Add Helge as fbdev maintainer

2022-01-18 Thread Gerd Hoffmann
  Hi,

> > fbcon/fbdev emulation: RGB332 support must be added I think.  But both
> > argb888 and rgb565 are supported today, so it should not be hard to find
> > the places where you have to add some code to handle RGB332 too.
> 
> I'd expect that that framework is provided by DRM developers if there is the wish
> to get rid of old fbdev and transition existing drivers over to use DRM.

Good luck with that.  Asking people to code up stuff they can't even
test due to lack of hardware isn't going to fly very well.

> > Yes.  The world shifted from 2d acceleration to 3d acceleration.  Modern
> > hardware simply has no classic blitter any more.  Which is a problem
> > when it comes to keeping scroll acceleration alive, it is already a very
> > niche use case and it will only become worse ...
> 
> For me it's Ok that the DRM drivers don't use 2d acceleration (as it is today
> with the arguments mentioned multiple times).
> But the patches broke existing fbdev acceleration which is available by
> the fbdev drivers. That's a big regression from point of fbdev.

Keeping it alive and working will be a tough challenge.  2d acceleration
is a thing of the past and the number of people which are able and
willing to put effort into supporting it will only decline.  It has been
a problem for a while already, basically it was dropped due to lack of
support ...

take care,
  Gerd



Re: [PATCH] MAINTAINERS: Add Helge as fbdev maintainer

2022-01-18 Thread Gerd Hoffmann
  Hi,

> > fbcon could do the same, i.e. render to fbdev in a 60Hz timer instead of
> > doing it synchronously.
> 
> Yeah, and if you use the shadow fb support in drm fbdev helpers,
> that's what you get. Maybe we should just make that the default, since
> that would also greatly simply stuff like modesetting support for
> fbdev.

Yep, that helps of course.  I was thinking more about the chars + attrs
+ font -> framebuffer step in the fbcon rendering pipeline though, where
one could do something similar to save even more cpu cycles.

take care,
  Gerd



Re: [PATCH] MAINTAINERS: Add Helge as fbdev maintainer

2022-01-18 Thread Gerd Hoffmann
On Tue, Jan 18, 2022 at 10:33:23AM +0200, Pekka Paalanen wrote:
> On Mon, 17 Jan 2022 19:47:39 +0100
> Sven Schnelle  wrote:
> 
> > I also tested the speed on my Thinkpad X1 with Intel graphics, and there
> > a dmesg with 919 lines one the text console took about 2s to display. In
> > x11, i measure 22ms. This might be unfair because encoding might be
> > different, but i cannot confirm the 'memcpy' is faster than hardware
> > blitting' point. I think if that would be the case, no-one would care
> > about 2D acceleration.
> 
> I think that is an extremely unfair comparison, because a graphical
> terminal app is not going to render every line of text streamed to it.
> It probably renders only the final view alone if you simply run
> 'dmesg', skipping the first 800-900 lines completely.

Probably more like "render on every vblank", but yes, unlike fbcon it
surely wouldn't render every single character sent to the terminal.

Also acceleration on modern hardware is more like "compose window
content using the 3d engine" than "use 2d blitter to scroll the window".

> Maybe fbcon should do the same when presented with a flood of text,
> but I don't know how or why it works like it works.

fbcon could do the same, i.e. render to fbdev in a 60Hz timer instead of
doing it synchronously.

take care,
  Gerd



Re: [PATCH] MAINTAINERS: Add Helge as fbdev maintainer

2022-01-18 Thread Gerd Hoffmann
On Tue, Jan 18, 2022 at 09:20:43AM +0100, Helge Deller wrote:
> On 1/18/22 07:29, Gerd Hoffmann wrote:
> >> Please correct me if I'm wrong, but text-console emulation/scrolling on DRM is
> >> currently unaccelerated and bound to Truecolour modes only,
> >
> > Yes.  Adding support for formats beside argb8888 to the drm fbcon
> > emulation shouldn't be that much of a problem though.
> 
> Really? Assuming a graphic card which runs with only 256 colors framebuffer
> is easily supported by DRM, and you can use fbcon without using lots of 
> memcpy()?

Driver: program a fixed color cube palette, then use RGB332.

fbcon/fbdev emulation: RGB332 support must be added I think.  But both
argb8888 and rgb565 are supported today, so it should not be hard to find
the places where you have to add some code to handle RGB332 too.
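For the driver side, the fixed color cube could be computed like this (a sketch assuming the classic 3-3-2 bit split; the names are illustrative, not taken from any existing driver):

```c
#include <stdint.h>

/* Expand a 3-bit / 2-bit component to 8 bit by replicating its bits,
 * so 0 maps to 0x00 and the maximum value maps to 0xff. */
static uint8_t expand3(uint8_t v) { return (v << 5) | (v << 2) | (v >> 1); }
static uint8_t expand2(uint8_t v) { return (v << 6) | (v << 4) | (v << 2) | v; }

/* Fill a 256-entry palette where entry i encodes rrrgggbb. */
static void rgb332_fill_palette(uint8_t r[256], uint8_t g[256], uint8_t b[256])
{
	for (int i = 0; i < 256; i++) {
		r[i] = expand3((i >> 5) & 0x7);	/* bits 7..5: red   */
		g[i] = expand3((i >> 2) & 0x7);	/* bits 4..2: green */
		b[i] = expand2(i & 0x3);	/* bits 1..0: blue  */
	}
}
```

A real driver would write these r/g/b values to the hardware palette registers once at init and never touch them again.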

> > Acceleration is harder.  The scroll acceleration had issues nobody
> > addressed for years, and on modern hardware it is simply not used, which
> > is probably the reason nobody stepped up fixing things and it ended up
> > being dropped.
> 
> The DRM layer doesn't use scroll acceleration.
> More than 30 other existing fbdev drivers use it.

Yes.  The world shifted from 2d acceleration to 3d acceleration.  Modern
hardware simply has no classic blitter any more.  Which is a problem
when it comes to keeping scroll acceleration alive, it is already a very
niche use case and it will only become worse ...

> > Bringing it back is much more work than just reverting the commits removing it.
> 
> Reverting those commits have no effect on DRM's usage of fbcon.
> But reverting those commits bring back scroll acceleration for all others.
> I'm trying to find out which patches did apparently fixed such issues
> for the REDRAW case. If you have a pointer it would be helpful.

IIRC the code had a bunch of races and syzkaller flagged problems.
I didn't follow very closely though.

take care,
  Gerd



Re: [PATCH] udmabuf: validate ubuf->pagecount

2022-01-17 Thread Gerd Hoffmann
On Thu, Dec 30, 2021 at 05:26:49PM +0300, Pavel Skripkin wrote:
> Syzbot has reported GPF in sg_alloc_append_table_from_pages(). The
> problem was in ubuf->pages == ZERO_PTR.
> 
> ubuf->pagecount is calculated from arguments passed from user-space. If
> user creates udmabuf with list.size == 0 then ubuf->pagecount will be
> also equal to zero; it causes kmalloc_array() to return ZERO_PTR.
> 
> Fix it by validating ubuf->pagecount before passing it to
> kmalloc_array().
> 
> Fixes: fbb0de795078 ("Add udmabuf misc device")
> Reported-and-tested-by: syzbot+2c56b725ec547fa9c...@syzkaller.appspotmail.com
> Signed-off-by: Pavel Skripkin 
> ---

Pushed to drm-misc-next.

thanks,
  Gerd



Re: [PATCH] drm/virtio: Ensure that objs is not NULL in virtio_gpu_array_put_free()

2022-01-17 Thread Gerd Hoffmann
On Mon, Dec 13, 2021 at 07:31:22PM +0100, Roberto Sassu wrote:
> If virtio_gpu_object_shmem_init() fails (e.g. due to fault injection, as it
> happened in the bug report by syzbot), virtio_gpu_array_put_free() could be
> called with objs equal to NULL.
> 
> Ensure that objs is not NULL in virtio_gpu_array_put_free(), or otherwise
> return from the function.
> 
> Cc: sta...@vger.kernel.org # 5.13.x
> Signed-off-by: Roberto Sassu 
> Reported-by: syzbot+e9072e90624a31dfa...@syzkaller.appspotmail.com
> Fixes: 377f8331d0565 ("drm/virtio: fix possible leak/unlock virtio_gpu_object_array")

Pushed to drm-misc-next.

thanks,
  Gerd



Re: [PATCH] MAINTAINERS: Add Helge as fbdev maintainer

2022-01-17 Thread Gerd Hoffmann
  Hi,

> Please correct me if I'm wrong, but text-console emulation/scrolling on DRM is
> currently unaccelerated and bound to Truecolour modes only,

Yes.  Adding support for formats beside argb8888 to the drm fbcon
emulation shouldn't be that much of a problem though.

Acceleration is harder.  The scroll acceleration had issues nobody
addressed for years, and on modern hardware it is simply not used, which
is probably the reason nobody stepped up fixing things and it ended up
being dropped.  Bringing it back is much more work than just reverting
the commits removing it.

Also note that using a shadow framebuffer allows to decouple fbcon
updates and scanout framebuffer updates.  Can be used to speed up
things without depending on the 2d blitter.

take care,
  Gerd



Re: [PATCH] MAINTAINERS: Add Helge as fbdev maintainer

2022-01-17 Thread Gerd Hoffmann
On Mon, Jan 17, 2022 at 02:29:47PM +0100, Geert Uytterhoeven wrote:
> Hi Gerd,
> 
> On Mon, Jan 17, 2022 at 1:57 PM Gerd Hoffmann  wrote:
> > > b) to include new drivers (for old hardware) if they arrive (probably 
> > > happens rarely but there can be).
> > >I know of at least one driver which won't be able to support DRM
> >
> > Hmm?  I seriously doubt that.  There is always the option to use a
> > shadow framebuffer, then convert from standard drm formats to whatever
> > esoteric pixel format your hardware expects.
> >
> > Been there, done that.  Have a look at the cirrus driver.  The physical
> > hardware was designed in the early 90-ies, almost 30 years ago.  These
> > days it exists in virtual form only (qemu emulates it).  Thanks to the
> > drm driver it runs wayland just fine even though it has a bunch of
> > constrains dictated by the hardware design.
> 
> The Cirrus DRM driver supports TrueColor (RGB565/888 and ARGB)
> modes only.  The Cirrus fbdev driver also supports monochrome and 256
> color modes.
> 
> There exist some DRM drivers that do support DRM_FORMAT_C8, but none of
> the "tiny" ones do. Same for DRM_FORMAT_RGB{332,233}.

Adding that to the cirrus driver shouldn't be hard.  I'm wondering
whether there are any userspace apps which would actually use that
though.

take care,
  Gerd



Re: [PATCH] MAINTAINERS: Add Helge as fbdev maintainer

2022-01-17 Thread Gerd Hoffmann
  Hi,

> b) to include new drivers (for old hardware) if they arrive (probably happens 
> rarely but there can be).
>I know of at least one driver which won't be able to support DRM

Hmm?  I seriously doubt that.  There is always the option to use a
shadow framebuffer, then convert from standard drm formats to whatever
esoteric pixel format your hardware expects.

Been there, done that.  Have a look at the cirrus driver.  The physical
hardware was designed in the early 90s, almost 30 years ago.  These
days it exists in virtual form only (qemu emulates it).  Thanks to the
drm driver it runs wayland just fine even though it has a bunch of
constraints dictated by the hardware design.

take care,
  Gerd



Re: [RFC 26/32] drm: handle HAS_IOPORT dependencies

2022-01-02 Thread Gerd Hoffmann
On Mon, Dec 27, 2021 at 05:43:11PM +0100, Niklas Schnelle wrote:
> In a future patch HAS_IOPORT=n will result in inb()/outb() and friends
> not being declared. We thus need to add HAS_IOPORT as dependency for
> those drivers using them. There is also a direct and hard coded use in
> cirrus.c which according to the comment is only necessary during resume.
> Let's just skip this as for example s390 which doesn't have I/O port
> support also doesn't support suspend/resume.

>  config DRM_BOCHS
>   tristate "DRM Support for bochs dispi vga interface (qemu stdvga)"
>   depends on DRM && PCI && MMU
> + depends on HAS_IOPORT
>   select DRM_KMS_HELPER
>   select DRM_VRAM_HELPER
>   select DRM_TTM

On devices with an mmio bar the driver works just fine without inb/outb,
see bochs->mmio checks in bochs.c

take care,
  Gerd



Re: [PATCH v3 03/10] drm/bochs: Replace module-init boiler-plate code with DRM helpers

2021-12-22 Thread Gerd Hoffmann
On Wed, Dec 22, 2021 at 09:28:24AM +0100, Javier Martinez Canillas wrote:
> -static int __init bochs_init(void)
> -{
> - if (drm_firmware_drivers_only() && bochs_modeset == -1)
> - return -EINVAL;

Also cleanup bochs_modeset?  I guess it's not used any more after this
patch ...

take care,
  Gerd



Re: Reuse framebuffer after a kexec (amdgpu / efifb)

2021-12-11 Thread Gerd Hoffmann
On Fri, Dec 10, 2021 at 07:54:34PM -0500, Felix Kuehling wrote:
 
> Do you actually need to restore the exact boot-up mode? If you have the same
> framebuffer memory layout (width, height, bpp, stride) the precise display
> timing doesn't really matter. So we "just" need to switch to a mode that's
> compatible with the efifb framebuffer parameters and point the display
> engine at the efifb as the scan-out buffer.

That'll probably be doable for a normal kexec, but in case of a crashdump
kexec I don't think it is a good idea to touch the gpu using the driver
of the kernel which just crashed ...

take care,
  Gerd



Re: Reuse framebuffer after a kexec (amdgpu / efifb)

2021-12-09 Thread Gerd Hoffmann
  Hi,

> > The drivers are asic and platform specific.  E.g., the driver for
> > vangogh is different from renoir is different from skylake, etc.  The
> > display programming interfaces are asic specific.
> 
> Cool, that makes sense! But if you (or anybody here) know some of these
> GOP drivers, e.g. for the qemu/qxl device,

OvmfPkg/QemuVideoDxe in tianocore source tree.

> I'm just curious to see/understand how complex is the FW driver to
> just put the device/screen in a usable state.

Note that qemu has a paravirtual interface for vesa vga mode programming
where you basically program a handful of registers with xres, yres,
depth etc. (after resetting the device to put it into vga compatibility
mode) and you are done.
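For illustration, that register sequence looks roughly like this.  The indices and flag values below match the bochs dispi interface used by qemu's stdvga; the `dispi_write()` here just records the writes so the sketch stays runnable — a real driver would issue outw() to the index/data ports (0x01ce/0x01cf) or use the MMIO window instead:

```c
#include <stdint.h>

#define VBE_DISPI_INDEX_XRES	0x1
#define VBE_DISPI_INDEX_YRES	0x2
#define VBE_DISPI_INDEX_BPP	0x3
#define VBE_DISPI_INDEX_ENABLE	0x4
#define VBE_DISPI_ENABLED	0x01
#define VBE_DISPI_LFB_ENABLED	0x40

static uint16_t regs[16];	/* stand-in for the device's register file */

static void dispi_write(uint16_t index, uint16_t val)
{
	/* real code: outw(index, 0x01ce); outw(val, 0x01cf); */
	regs[index] = val;
}

static void dispi_set_mode(uint16_t xres, uint16_t yres, uint16_t bpp)
{
	dispi_write(VBE_DISPI_INDEX_ENABLE, 0);		/* disable while changing */
	dispi_write(VBE_DISPI_INDEX_XRES, xres);
	dispi_write(VBE_DISPI_INDEX_YRES, yres);
	dispi_write(VBE_DISPI_INDEX_BPP, bpp);
	dispi_write(VBE_DISPI_INDEX_ENABLE,
		    VBE_DISPI_ENABLED | VBE_DISPI_LFB_ENABLED);
}
```

That's the whole mode set — no clocks, no timings, no link training.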

Initializing physical hardware is an order of magnitude harder than
that.

With qxl you could also go figure the current state of the hardware and
fill screen_info with that to get a working boot framebuffer in the
kexec'ed kernel.

Problem with this approach is this works only in case the framebuffer
happens to be in a format usable by vesafb/efifb.  So no modifiers
(tiling etc.) and contiguous in physical address space.  That is true
for qxl.  With virtio-gpu it wouldn't work though (framebuffer can be
scattered), and I expect with most modern physical hardware it wouldn't
work either.

take care,
  Gerd



Re: [PATCH 0/2] virtgpu dummy events

2021-11-29 Thread Gerd Hoffmann
  Hi,

> On the series:
> 
> Reviewed-by: Daniel Vetter 
> 
> I'm assuming someone from Google can push this to drm-misc-fixes for you?

Thanks, pushed.

take care,
  Gerd



Re: [PATCH] drm/virtio: Fix NULL dereference error in virtio_gpu_poll

2021-11-08 Thread Gerd Hoffmann
On Thu, Nov 04, 2021 at 02:42:49PM -0700, Vivek Kasireddy wrote:
> When virgl is not enabled, vfpriv pointer would not be allocated.
> Therefore, check for a valid value before dereferencing.
> 
> Reported-by: Christian Zigotzky 
> Cc: Gurchetan Singh 
> Cc: Gerd Hoffmann 
> Signed-off-by: Vivek Kasireddy 
> ---
>  drivers/gpu/drm/virtio/virtgpu_drv.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.c b/drivers/gpu/drm/virtio/virtgpu_drv.c
> index 749db18dcfa2..d86e1ad4a972 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_drv.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_drv.c
> @@ -163,10 +163,11 @@ static __poll_t virtio_gpu_poll(struct file *filp,
>   struct drm_file *drm_file = filp->private_data;
>   struct virtio_gpu_fpriv *vfpriv = drm_file->driver_priv;
>   struct drm_device *dev = drm_file->minor->dev;
> + struct virtio_gpu_device *vgdev = dev->dev_private;
>   struct drm_pending_event *e = NULL;
>   __poll_t mask = 0;
>  
> - if (!vfpriv->ring_idx_mask)
> + if (!vgdev->has_virgl_3d || !vfpriv || !vfpriv->ring_idx_mask)
>   return drm_poll(filp, wait);
>  
>   poll_wait(filp, &drm_file->event_wait, wait);

Pushed to drm-misc-next-fixes.

thanks,
  Gerd



Re: [PATCH] drm/virtio: delay pinning the pages till first use

2021-11-03 Thread Gerd Hoffmann
On Tue, Nov 02, 2021 at 08:58:55AM -0700, Chia-I Wu wrote:
> On Tue, Nov 2, 2021 at 6:07 AM Gerd Hoffmann  wrote:
> >
> > On Tue, Nov 02, 2021 at 12:31:39PM +0100, Maksym Wezdecki wrote:
> > > From: mwezdeck 
> > >
> > > The idea behind the commit:
> > >   1. not pin the pages during resource_create ioctl
> > >   2. pin the pages on the first use during:
> > >   - transfer_*_host ioctl
> > > - map ioctl
> >
> > i.e. basically lazy pinning.  Approach looks sane to me.
> >
> > >   3. introduce new ioctl for pinning pages on demand
> >
> > What is the use case for this ioctl?
> > In any case this should be a separate patch.
> 
> Lazy pinning can be a nice optimization that userspace does not
> necessarily need to know about.  This patch however skips pinning for
> execbuffer ioctl and introduces a new pin ioctl instead.  That is a
> red flag.

Ah, so the pin ioctl is for buffers which need a pin for execbuffer.

Yep, that isn't going to fly that way, it'll break old userspace.

Lazy pinning must be opt-in, so new userspace which knows about
the pin ioctl can enable lazy pinning.  One possible way would
be to add a flag for the VIRTGPU_RESOURCE_CREATE ioctl, so lazy
pinning can be enabled per resource.
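A toy model of that opt-in flow — the flag name, bit value and helpers below are invented for illustration and are not part of the real virtgpu uapi:

```c
#include <stdint.h>

/* invented flag bit for the resource-create ioctl -- NOT the real uapi */
#define HYP_VIRTGPU_RESOURCE_LAZY_PIN	(1u << 0)

struct hyp_resource {
	uint32_t flags;
	int pinned;
};

/* kernel side: pin eagerly unless userspace opted in to lazy pinning,
 * so old userspace which never sets the flag keeps working unchanged */
static void hyp_resource_create(struct hyp_resource *res, uint32_t flags)
{
	res->flags = flags;
	res->pinned = !(flags & HYP_VIRTGPU_RESOURCE_LAZY_PIN);
}

/* first use (transfer/map/pin ioctl): pin on demand */
static void hyp_resource_use(struct hyp_resource *res)
{
	if (!res->pinned)
		res->pinned = 1;	/* real code: allocate pages + attach backing */
}
```

New userspace that knows about the pin ioctl sets the flag; everything else gets the old eager-pin behavior.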

take care,
  Gerd



Re: [PATCH] drm/virtio: delay pinning the pages till first use

2021-11-02 Thread Gerd Hoffmann
On Tue, Nov 02, 2021 at 12:31:39PM +0100, Maksym Wezdecki wrote:
> From: mwezdeck 
> 
> The idea behind the commit:
>   1. not pin the pages during resource_create ioctl
>   2. pin the pages on the first use during:
>   - transfer_*_host ioctl
> - map ioctl

i.e. basically lazy pinning.  Approach looks sane to me.

>   3. introduce new ioctl for pinning pages on demand

What is the use case for this ioctl?
In any case this should be a separate patch.

> + struct virtio_gpu_object_array *objs;
> + struct virtio_gpu_object *bo;
> + struct virtio_gpu_object_shmem *shmem;
> +
> + objs = virtio_gpu_array_from_handles(file, &virtio_gpu_map->handle, 1);
> + if (objs == NULL)
> + return -ENOENT;
> +
> + bo = gem_to_virtio_gpu_obj(objs->objs[0]);
> + if (bo == NULL)
> + return -ENOENT;
> + 
> + shmem = to_virtio_gpu_shmem(bo);
> + if (shmem == NULL)
> + return -ENOENT;
> +
> + if (!shmem->pages) {
> + virtio_gpu_object_pin(vgdev, objs, 1);
> + }

Move this into virtio_gpu_object_pin(),
or create a helper function for it ...

> + objs = virtio_gpu_array_from_handles(file, &virtio_gpu_pin->handle, 1);
> + if (objs == NULL)
> + return -ENOENT;
> +
> + bo = gem_to_virtio_gpu_obj(objs->objs[0]);
> + if (bo == NULL)
> + return -ENOENT;
> + 
> + shmem = to_virtio_gpu_shmem(bo);
> + if (shmem == NULL)
> + return -ENOENT;
> +
> + if (!shmem->pages) {
> + return virtio_gpu_object_pin(vgdev, objs, 1);
> + }

... to avoid this code duplication?

> +int virtio_gpu_object_pin(struct virtio_gpu_device *vgdev,
> +   struct virtio_gpu_object_array *objs,
> +   int num_gem_objects)
> +{
> + int i, ret;
> +
> + for (i = 0; i < num_gem_objects; i++) {

> + ret = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents);
> + if (ret != 0) {
> + return -EFAULT;
> + }
> +
> + virtio_gpu_object_attach(vgdev, bo, ents, nents);

I think it is enough to do the virtio_gpu_object_attach() call lazily.
virtio_gpu_object_shmem_init() should not actually allocate pages; that
only happens when virtio_gpu_object_attach() asks for a scatter
list.

take care,
  Gerd



Re: [PATCH] MAINTAINERS: add reviewers for virtio-gpu

2021-10-29 Thread Gerd Hoffmann
On Thu, Oct 28, 2021 at 02:34:46PM -0700, Chia-I Wu wrote:
> Add Gurchetan Singh and me as reviewers for virtio-gpu.
> 
> Signed-off-by: Chia-I Wu 
> Acked-by: Gurchetan Singh 
> Cc: David Airlie 
> Cc: Gerd Hoffmann 

Pushed to drm-misc-next.

thanks,
  Gerd



Re: drm/virtio: not pin pages on demand

2021-10-27 Thread Gerd Hoffmann
[ Cc'ing Gurchetan Singh ]

> Can we follow up on this issue?
> 
> The main pain point with your suggestion is the fact,
> that it will cause VirGL protocol breakage and we would
> like to avoid this.
> 
> Extending execbuffer ioctl and create_resource ioctl is
> more convenient than having the protocol broken.

Do you know at resource creation time whenever you need cpu access
or not?  IOW can we make that a resource property, so we don't have
pass around lists of objects on each and every execbuffer ioctl?

> Blob resources are not a solution either, because QEMU doesn't
> support blob resources right now.
> 
> In an ideal world, when both QEMU and crosvm support blob resources,
> we can switch to blob resources for textures.

That'll only happen if someone works on it.
I will not be able to do that though.

> I would like to introduce changes gradually and not make a protocol
> breakage.

Well, opengl switching to blob resources is a protocol change anyway.
That doesn't imply things will break though.  We have capsets etc to
extend the protocol while maintaining backward compatibility.

> What do you think about that?

I still think that switching to blob resources would be the better
solution.  Yes, it's a lot of work and not something which helps
short-term.  But adding a new API as an interim solution isn't great
either.  So, not sure.  Chia-I Wu?  Gurchetan Singh?


While being at it:  Chia-I Wu & Gurchetan Singh, could you help
reviewing virtio-gpu kernel patches?  I think you both have a better
view on the big picture (with both mesa and virglrenderer) than I have.
Also for the crosvm side of things.  The procedure for that would be to
send a patch adding yourself to the virtio-gpu section of the
MAINTAINERS file, so scripts/get_maintainer.pl will Cc you on patches
submitted.

thanks,
  Gerd

> 
> Maksym
> 
> 
> On 10/22/21 10:44 AM, Maksym Wezdecki wrote:
> 
> > Once again with all lists and receivers. I'm sorry for that.
> >
> > On 10/21/21 6:42 PM, Chia-I Wu wrote:
> >> On Thu, Oct 21, 2021 at 4:52 AM Gerd Hoffmann  wrote:
> >>> On Thu, Oct 21, 2021 at 11:55:47AM +0200, Maksym Wezdecki wrote:
> >>>> I get your point. However, we need to make resource_create ioctl,
> >>>> in order to create corresponding resource on the host.
> >>> That used to be the case but isn't true any more with the new
> >>> blob resources.  virglrenderer allows to create gpu objects
> >>> via execbuffer.  Those gpu objects can be linked to a
> >>> virtio-gpu resources, but it's optional.  In your case you
> >>> would do that only for your staging buffer.  The other textures
> >>> (which you fill with a host-side copy from the staging buffer)
> >>> do not need a virtio-gpu resource in the first place.
> >> That's however a breaking change to the virgl protocol.  All virgl
> >> commands expect res ids rather than blob ids.
> >>
> >> Using VIRTGPU_BLOB_MEM_HOST3D could in theory work.  But there are a
> >> few occasions where virglrenderer expects there to be guest storage.
> >> There are also readbacks that we need to support.  We might end up
> >> using VIRTGPU_BLOB_MEM_HOST3D_GUEST, where pin-on-demand is still
> >> desirable.
> >>
> >> For this patch, I think the uapi change can be simplified.  It can be
> >> a new param plus a new field in drm_virtgpu_execbuffer
> >>
> >>   __u64 bo_flags; /* pointer to an array of size num_bo_handles, or NULL */
> >>
> >> The other changes do not seem needed.
> > My first approach was the same, as you mentioned. However, having "__u64 bo_flags"
> > needs a for loop, where only a few of the bos need to be pinned - this has
> > performance implications. I would rather pass bo handles that should be pinned
> > than having a lot of flags, where only 1-2 bos need to be pinned.
> >
> >>> take care,
> >>>   Gerd
> >>>

-- 



Re: drm/virtio: not pin pages on demand

2021-10-21 Thread Gerd Hoffmann
On Thu, Oct 21, 2021 at 11:55:47AM +0200, Maksym Wezdecki wrote:
> I get your point. However, we need to make resource_create ioctl,
> in order to create corresponding resource on the host.

That used to be the case but isn't true any more with the new
blob resources.  virglrenderer allows to create gpu objects
via execbuffer.  Those gpu objects can be linked to a
virtio-gpu resources, but it's optional.  In your case you
would do that only for your staging buffer.  The other textures
(which you fill with a host-side copy from the staging buffer)
do not need a virtio-gpu resource in the first place.

take care,
  Gerd



Re: drm/virtio: not pin pages on demand

2021-10-21 Thread Gerd Hoffmann
On Thu, Oct 21, 2021 at 09:44:45AM +0200, Maksym Wezdecki wrote:
> From: mwezdeck 
> 
> The idea behind the commit:
>   1. when resource is created, let user space decide
>  if resource should be pinned or not
>   2. transfer_*_host needs pinned memory. If it is not
>  pinned, then pin it.
>   3. during execbuffer, decide which bo handles should
>  be pinned based on the list provided by user space

When you never need cpu access to your gpu object, there is
no need to create a resource in the first place.

take care,
  Gerd



Re: [PATCH] drm/virtio: fix the missed drm_gem_object_put() in virtio_gpu_user_framebuffer_create()

2021-10-11 Thread Gerd Hoffmann
On Sat, Oct 09, 2021 at 05:09:20PM +0800, Jing Xiangfeng wrote:
> virtio_gpu_user_framebuffer_create() misses to call drm_gem_object_put()
> in an error path. Add the missed function call to fix it.

Pushed to drm-misc-next.

thanks,
  Gerd



Re: [PATCH v3 00/12] Context types, v3

2021-09-29 Thread Gerd Hoffmann
> Gurchetan Singh (10):
>   virtio-gpu api: multiple context types with explicit initialization
>   drm/virtgpu api: create context init feature
>   drm/virtio: implement context init: track valid capabilities in a mask
>   drm/virtio: implement context init: track {ring_idx, emit_fence_info}
> in virtio_gpu_fence
>   drm/virtio: implement context init: plumb {base_fence_ctx, ring_idx}
> to virtio_gpu_fence_alloc
>   drm/virtio: implement context init: stop using drv->context when
> creating fence
>   drm/virtio: implement context init: allocate an array of fence
> contexts
>   drm/virtio: implement context init: handle
> VIRTGPU_CONTEXT_PARAM_POLL_RINGS_MASK
>   drm/virtio: implement context init: add virtio_gpu_fence_event
>   drm/virtio: implement context init: advertise feature to userspace

Pushed to drm-misc-next.

thanks,
  Gerd



Re: [virtio-dev] Re: [PATCH v1 08/12] drm/virtio: implement context init: stop using drv->context when creating fence

2021-09-15 Thread Gerd Hoffmann
  Hi,

> > I guess you need to also update virtio_gpu_fence_event_process()
> > then?  It currently has the strict ordering logic baked in ...
> 
> The update to virtio_gpu_fence_event_process was done as a preparation a
> few months back:
> 
> https://cgit.freedesktop.org/drm/drm-misc/commit/drivers/gpu/drm/virtio/virtgpu_fence.c?id=36549848ed27c22bb2ffd5d1468efc6505b05f97

Ah, ok, missed the detail that the context check is already there.

thanks,
  Gerd



Re: [PATCH 2/2] drm/qxl: Add qxl dma fence release function

2021-09-15 Thread Gerd Hoffmann
On Tue, Sep 14, 2021 at 02:23:52AM -0400, bibo mao wrote:
> Add a qxl dma fence release function; previously the default dma fence
> release function was used, and the fence pointer was used to free
> the memory. With this patch, the actual qxl_release pointer is used
> to free the memory, so that the dma fence can be placed anywhere in
> struct qxl_release.

Why?  Is there a problem with struct dma_fence being the first
element of struct qxl_release?

take care,
  Gerd
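The question above hinges on a standard C layout property: when the fence is the first member, the fence pointer and the pointer to its container are the same address, so freeing via the fence pointer already frees the whole object, and `container_of()` works from any member offset. A minimal userspace illustration with stand-in struct names (not the real qxl types):

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in structs showing why the default release path works when the
 * fence is the first member: the fence pointer equals the pointer to
 * the containing object, so freeing the fence frees the whole thing.
 * container_of() recovers the container from any member, at any offset. */
struct fake_fence { int refcount; };

struct fake_release {
	struct fake_fence base;	/* first member: same address as container */
	int id;
};

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))
```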



Re: [PATCH 1/2] drm/qxl: User page size macro for qxl release bo

2021-09-15 Thread Gerd Hoffmann
On Tue, Sep 14, 2021 at 02:23:51AM -0400, bibo mao wrote:
> Some architectures have a different default page size; this patch
> replaces the hardcoded 4096 with the PAGE_SIZE macro, since the cmd
> bo size is page aligned.
> 
> Signed-off-by: bibo mao 

Pushed to drm-misc-next.

thanks,
  Gerd
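The change boils down to rounding the cmd bo size up to the architecture's page size instead of a literal 4096. The usual power-of-two round-up can be sketched as follows (a userspace model of the kernel's ALIGN(n, PAGE_SIZE); the page-size values are illustrative):

```c
#include <assert.h>
#include <stddef.h>

/* Round n up to a multiple of the page size; 'pg' must be a power of
 * two.  The point of the patch: PAGE_SIZE is 4096 on x86 but e.g.
 * 16384 or 65536 on some other architectures, so a literal 4096 would
 * under-align the buffer there. */
static size_t page_align(size_t n, size_t pg)
{
	return (n + pg - 1) & ~(pg - 1);
}
```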



Re: [PATCH] drm/bochs: add Bochs PCI ID for Simics model

2021-09-15 Thread Gerd Hoffmann
On Thu, Sep 09, 2021 at 06:06:55PM -0700, H. Peter Anvin (Intel) wrote:
> Current (and older) Simics models for the Bochs VGA used the wrong PCI
> vendor ID (0x4321 instead of 0x1234).  Although this can hopefully be
> fixed in the future, it is a problem for users of the current version,
> not the least because to update the device ID the BIOS has to be
> rebuilt in order to see BIOS output.
> 
> Add support for the 4321: device number in addition to the
> 1234: one.
> 
> Signed-off-by: H. Peter Anvin (Intel) 

Pushed to drm-misc-next.

thanks,
  Gerd



Re: [PATCH v1 08/12] drm/virtio: implement context init: stop using drv->context when creating fence

2021-09-14 Thread Gerd Hoffmann
On Wed, Sep 08, 2021 at 06:37:13PM -0700, Gurchetan Singh wrote:
> The plumbing is all here to do this.  Since we always use the
> default fence context when allocating a fence, this makes no
> functional difference.
> 
> We can't process just the largest fence id anymore, since it's
> associated with different timelines.  It's fine for fence_id
> 260 to signal before 259.  As such, process each fence_id
> individually.

I guess you need to also update virtio_gpu_fence_event_process()
then?  It currently has the strict ordering logic baked in ...

take care,
  Gerd
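The per-fence-id processing described in the commit message above could be modeled roughly like this. It is a simplified userspace sketch, not the driver code: with multiple fence contexts (timelines), the largest id seen no longer implies everything below it is done, so each completed id has to be matched and signaled individually:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified model of per-id fence completion across timelines. */
struct model_fence {
	uint64_t ctx;	/* fence context / timeline */
	uint64_t id;	/* fence id within that timeline */
	bool signaled;
};

/* Signal exactly the fence matching (ctx, id); ids from different
 * contexts may complete out of order (e.g. 260 before 259), so no
 * "signal everything up to the largest id" shortcut is possible. */
static void signal_fence(struct model_fence *fences, size_t n,
			 uint64_t ctx, uint64_t id)
{
	for (size_t i = 0; i < n; i++) {
		if (fences[i].ctx == ctx && fences[i].id == id) {
			fences[i].signaled = true;
			return;
		}
	}
}
```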



Re: [RFC v1 4/6] drm/virtio: Probe and implement VIRTIO_GPU_F_RELEASE_FENCE feature

2021-09-14 Thread Gerd Hoffmann
  Hi,

> --- a/include/uapi/linux/virtio_gpu.h
> +++ b/include/uapi/linux/virtio_gpu.h
> @@ -60,6 +60,8 @@
>   */
>  #define VIRTIO_GPU_F_RESOURCE_BLOB   3
>  
> +#define VIRTIO_GPU_F_RELEASE_FENCE4
> +
>  enum virtio_gpu_ctrl_type {
>   VIRTIO_GPU_UNDEFINED = 0,

Where is the virtio-spec update for that?

thanks,
  Gerd



Re: [PATCH v2] drm/virtio: support mapping exported vram

2021-08-16 Thread Gerd Hoffmann
On Fri, Aug 13, 2021 at 09:54:41AM +0900, David Stevens wrote:
> Implement virtgpu specific map_dma_buf callback to support mapping
> exported vram object dma-bufs. The dma-buf callback is used directly, as
> vram objects don't have backing pages and thus can't implement the
> drm_gem_object_funcs.get_sg_table callback.
> 
> Signed-off-by: David Stevens 

Pushed to drm-misc-next.

thanks,
  Gerd



Re: Why we didn't use embedded gem object for virtio gpu when making ttm bo a gem bo subclass?

2021-08-15 Thread Gerd Hoffmann
On Sun, Aug 15, 2021 at 09:51:02PM -0700, lepton wrote:
> Hi Gerd,
> 
> Thanks for your reply. I was aware of that change, but need a fix for
> 5.4 kernel as a temp solution for now.
> If the reason is just that you will move away from ttm soon,then I
> guess a CL like http://crrev.com/c/3092457 should
> work for 5.4, just hope I don't miss anything else.

Looks sane on a quick glance.

take care,
  Gerd



Re: Why we didn't use embedded gem object for virtio gpu when making ttm bo a gem bo subclass?

2021-08-15 Thread Gerd Hoffmann
On Fri, Aug 13, 2021 at 12:42:51PM -0700, lepton wrote:
> Hi Gerd,
> 
> We found a bug in 5.4 kernel and virtgpu_gem_prime_mmap doesn't work
> because it references vma_node in gem_base object while ttm code
> initialized vma_node in tbo.base object. I am wondering, in your
> original serial:
> https://patchwork.kernel.org/project/dri-devel/cover/20190805124310.3275-1-kra...@redhat.com/
> (drm/ttm: make ttm bo a gem bo subclass), why you changed to use
> embedded gem object for most gpu drivers but skipping virtio gpu? Is
> there some specific reason?

commit c66df701e783bc666593e6e665f13670760883ee
Author: Gerd Hoffmann 
Date:   Thu Aug 29 12:32:57 2019 +0200

drm/virtio: switch from ttm to gem shmem helpers

HTH,
  Gerd



Re: [PATCH] drm/virtio: set non-cross device blob uuid_state

2021-08-12 Thread Gerd Hoffmann
On Wed, Aug 11, 2021 at 01:04:01PM +0900, David Stevens wrote:
> Blob resources without the cross device flag don't have a uuid to share
> with other virtio devices. When exporting such blobs, set uuid_state to
> STATE_ERR so that virtgpu_virtio_get_uuid doesn't hang.
> 
> Signed-off-by: David Stevens 

Pushed to drm-misc-next.

thanks,
  Gerd



Re: [PATCH] drm/virtio: support mapping exported vram

2021-08-12 Thread Gerd Hoffmann
  Hi,

> +static struct sg_table *virtgpu_gem_map_dma_buf(
> + struct dma_buf_attachment *attach,
> + enum dma_data_direction dir)

checkpatch doesn't like that:

-:47: CHECK:OPEN_ENDED_LINE: Lines should not end with a '('
#47: FILE: drivers/gpu/drm/virtio/virtgpu_prime.c:46:
+static struct sg_table *virtgpu_gem_map_dma_buf(

> +{
> + struct drm_gem_object *obj = attach->dmabuf->priv;
> + struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);
> +
> + if (virtio_gpu_is_vram(bo))
> + return virtio_gpu_vram_map_dma_buf(bo, attach->dev, dir);
> +
> + return drm_gem_map_dma_buf(attach, dir);
> +}
> +
> +static void virtgpu_gem_unmap_dma_buf(struct dma_buf_attachment *attach,
> +   struct sg_table *sgt,
> +   enum dma_data_direction dir)
> +{
> + struct drm_gem_object *obj = attach->dmabuf->priv;
> + struct virtio_gpu_object *bo = gem_to_virtio_gpu_obj(obj);
> +
> + if (virtio_gpu_is_vram(bo))
> + virtio_gpu_vram_unmap_dma_buf(attach->dev, sgt, dir);
> + else
> + drm_gem_unmap_dma_buf(attach, sgt, dir);
> +}

Minor nit:  Can we use the same logic in both functions?  I like the
virtgpu_gem_map_dma_buf variant (without else) more.
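The suggested "without else" shape, applied to the unmap path, could be sketched with stand-in types like this (a compilable model, not the kernel API; the stub names are invented for illustration):

```c
#include <assert.h>
#include <stdbool.h>

/* Stubbed model of the review suggestion: give unmap the same shape as
 * map, handling the vram case with an early return instead of if/else. */
struct stub_bo { bool is_vram; };

static int last_path;	/* 1 = vram helper, 2 = generic gem helper */

static void stub_vram_unmap(struct stub_bo *bo) { (void)bo; last_path = 1; }
static void stub_gem_unmap(struct stub_bo *bo)  { (void)bo; last_path = 2; }

static void unmap_dma_buf(struct stub_bo *bo)
{
	if (bo->is_vram) {
		stub_vram_unmap(bo);
		return;		/* early return, no else branch */
	}
	stub_gem_unmap(bo);
}
```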

Otherwise looks sane to me.

thanks,
  Gerd



Re: [RFC v1 0/4] drm: Add support for DRM_CAP_DEFERRED_OUT_FENCE capability

2021-08-03 Thread Gerd Hoffmann
  Hi,

> > virtio_gpu_primary_plane_update() will send RESOURCE_FLUSH only for
> > DIRTYFB and both SET_SCANOUT + RESOURCE_FLUSH for page-flip, and I
> > think for the page-flip case the host (aka qemu) doesn't get the
> > "wait until old framebuffer is not in use any more" right yet.
> [Kasireddy, Vivek] As you know, with the GTK UI backend and this patch 
> series: 
> https://lists.nongnu.org/archive/html/qemu-devel/2021-06/msg06745.html
> we do create a sync file fd -- after the Blit -- and wait (adding it to 
> Qemu's main
> event loop) for it to ensure that the Guest scanout FB is longer in use on 
> the Host.
> This mechanism works in a similarly way for both frontbuffer DIRTYFB case and
> also the double-buffer case. 

Well, we don't explicitly wait on the old framebuffer.  Not fully sure
this is actually needed, maybe the command ordering (SET_SCANOUT goes
first) is enough.

> > So we'll need a host-side fix for that and a guest-side fix to switch
> > from a blocking wait on the fence to vblank events.
> [Kasireddy, Vivek] Do you see any concerns with the blocking wait?

Well, it's sync vs. async for userspace.

With the blocking wait the userspace ioctl (PAGE_FLIP or the atomic
version of it) will return when the host is done.

Without the blocking wait the userspace ioctl will return right away and
userspace can do something else until the host is done (and the vblank
event is sent to notify userspace).

> And, are you
> suggesting that we use a vblank timer?

I think we should send the vblank event when the RESOURCE_FLUSH fence
signals the host is done.

take care,
  Gerd


