CISTI'2019 - Doctoral Symposium | Coimbra, Portugal

2019-01-18 Thread Maria Lemos
* Published in IEEE Xplore and indexed by ISI, Scopus, etc.


---
Doctoral Symposium of CISTI'2019 - 14th Iberian Conference on Information Systems and Technologies
Coimbra, Portugal, 19 - 22 June 2019
http://www.cisti.eu/




The purpose of the CISTI'2019 Doctoral Symposium is to provide graduate students 
with a setting where they can informally present and discuss their work, collect 
valuable expert opinions, and share new ideas, methods and applications. The 
Doctoral Symposium is an excellent opportunity for PhD students to present and 
discuss their work in a workshop format. Each presentation will be evaluated by 
a panel composed of at least three Information Systems and Technologies experts.



Contributions Submission

The Doctoral Symposium is open to PhD students whose research areas include 
the themes proposed for this Conference. Submissions must include an extended 
abstract (maximum 4 pages), following the Conference style guide. 
All selected contributions will be 
distributed with the Conference Proceedings on a CD with an ISBN. These 
contributions will be available in the IEEE Xplore Digital 
Library and will be sent for indexing in ISI, Scopus, EI-Compendex, INSPEC and 
Google Scholar.

Submissions must include the field of research, the PhD institution and the 
number of months devoted to the development of the work. Additionally, they 
should state in a clear and succinct manner:

• The problem addressed and its significance or relevance
• The research objectives and related investigation topics
• A brief overview of what is already known
• A proposed solution methodology for the problem
• Expected results



Important Dates

Paper submission: February 10, 2019

Notification of acceptance: March 17, 2019

Submission of accepted papers: March 31, 2019

Payment of registration, to ensure the inclusion of an accepted paper in the 
conference proceedings: April 1, 2019



Organizing Committee

Álvaro Rocha, Universidade de Coimbra

Manuel Pérez Cota, Universidad de Vigo



Scientific Committee

Manuel Pérez Cota, Universidad de Vigo (Chair)

A. Augusto Sousa, FEUP, Universidade do Porto

Adolfo Lozano Tello, Universidad de Extremadura

Alma María Gómez Rodríguez, Universidade de Vigo

Álvaro Rocha, Universidade de Coimbra

Ana Amélia Carvalho, Universidade de Coimbra

Ana Maria Ramalho Correia, NOVA Information Management School

António Coelho, FEUP, Universidade do Porto

Antonio Garcia-Loureiro, Universidad de Santiago de Compostela

Arnaldo Martins, Universidade de Aveiro

Arturo Méndez Penín, Universidade de Vigo

Bráulio Alturas, ISCTE - Instituto Universitário de Lisboa

Carlos Costa, ISEG, Universidade de Lisboa

Carlos Ferrás Sexto, Universidad de Santiago de Compostela

David Fonseca, La Salle, Universitat Ramon Llull

Ernest Redondo, Universidad Politécnica de Catalunya

Fernando Moreira, Universidade Portucalense

Fernando Ramos, Universidade de Aveiro

Francisco Restivo, Universidade Católica Portuguesa

Gonçalo Paiva Dias, Universidade de Aveiro

Gonzalo Cuevas Agustin, Universidad Politécnica de Madrid

Guilhermina Maria Lobato de Miranda, IE, Universidade de Lisboa

João Costa, Universidade de Coimbra

João Manuel R.S. Tavares, FEUP, Universidade do Porto

José Antonio Calvo-Manzano Villalón, Universidad Politécnica de Madrid

José Borbinha, IST, Universidade de Lisboa

José Machado, Universidade do Minho

José Martins, Universidade de Trás-os-Montes e Alto Douro

Juan de Dios Murillo, Universidad Nacional de Costa Rica

Leandro Rodríguez Linares, Universidade de Vigo

Luciano Boquete, Universidad de Alcalá

Luis Camarinha Matos, Universidade Nova de Lisboa

Luis Macedo, Universidade de Coimbra

Luís Paulo Reis, FEUP, Universidade do Porto

Marco Painho, NOVA Information Management School

Mareca López María Pilar, Universidad Politécnica de Madrid

María José Lado Touriño, Universidade de Vigo

Mário Piattini, Universidad de Castilla-La Mancha

Mário Rela, Universidade de Coimbra

Martin Llamas-Nistal, Universidad de Vigo

Miguel Ramón González Castro, Ence, Energía y Celulosa

Nelson Rocha, Universidade de Aveiro

Paulo Pinto, Universidade Nova de Lisboa

Óscar Mealha, Universidade de Aveiro

Ramiro Gonçalves, Universidade de Trás-os-Montes e Alto Douro

Vitor Santos, NOVA Information Management School

Yolanda García Vázquez, Universidad de Santiago de Compostela




Doctoral Symposium webpage: https://goo.gl/JTrcLB



Re: [PATCH v3 23/23] drm/qxl: add overflow checks to qxl_mode_dumb_create()

2019-01-18 Thread Ville Syrjälä
On Fri, Jan 18, 2019 at 04:49:44PM +0100, Daniel Vetter wrote:
> On Fri, Jan 18, 2019 at 01:20:20PM +0100, Gerd Hoffmann wrote:
> > Signed-off-by: Gerd Hoffmann 
> 
> We already do all reasonable overflow checks in drm_mode_create_dumb(). If
> you don't trust them, I think time would be better spent writing an igt to
> test this than adding redundant checks in all drivers.
> 
> You're also missing one check for bpp underflows :-)

BTW I just noticed that we don't seem to be validating 
create_dumb->flags at all. Someone should probably add some
checks for that, or mark it as deprecated in case we already
lost the battle with userspace stack garbage.

> -Daniel
> 
> > ---
> >  drivers/gpu/drm/qxl/qxl_dumb.c | 10 ++
> >  1 file changed, 6 insertions(+), 4 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/qxl/qxl_dumb.c b/drivers/gpu/drm/qxl/qxl_dumb.c
> > index 272d19b677..bed6d06ee4 100644
> > --- a/drivers/gpu/drm/qxl/qxl_dumb.c
> > +++ b/drivers/gpu/drm/qxl/qxl_dumb.c
> > @@ -37,11 +37,13 @@ int qxl_mode_dumb_create(struct drm_file *file_priv,
> > uint32_t handle;
> > int r;
> > struct qxl_surface surf;
> > -   uint32_t pitch, format;
> > +   uint32_t pitch, size, format;
> >  
> > -   pitch = args->width * ((args->bpp + 1) / 8);
> > -   args->size = pitch * args->height;
> > -   args->size = ALIGN(args->size, PAGE_SIZE);
> > +   if (check_mul_overflow(args->width, ((args->bpp + 1) / 8), &pitch))
> > +   return -EINVAL;
> > +   if (check_mul_overflow(pitch, args->height, &size))
> > +   return -EINVAL;
> > +   args->size = ALIGN(size, PAGE_SIZE);
> >  
> > switch (args->bpp) {
> > case 16:
> > -- 
> > 2.9.3
> > 
> 
> -- 
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch
> ___
> dri-devel mailing list
> dri-de...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel

-- 
Ville Syrjälä
Intel
___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
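The arithmetic the patch under discussion hardens — pitch = width × bytes per pixel, size = pitch × height, all in 32 bits — can be sketched in plain userspace C. This is an illustrative sketch only: __builtin_mul_overflow() is the GCC/Clang builtin that the kernel's check_mul_overflow() macro is built on, but the function name `dumb_size()` and the -1 error convention here are invented for the example, not kernel API.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical userspace analogue of the overflow-checked size
 * computation in qxl_mode_dumb_create().  Returns 0 on success,
 * -1 if either multiplication would wrap a uint32_t. */
static int dumb_size(uint32_t width, uint32_t height, uint32_t bpp,
                     uint32_t *size_out)
{
    uint32_t pitch, size;

    /* bytes per pixel, rounding bpp up to a whole byte */
    if (__builtin_mul_overflow(width, (bpp + 1) / 8, &pitch))
        return -1;
    if (__builtin_mul_overflow(pitch, height, &size))
        return -1;
    *size_out = size;
    return 0;
}
```

Note that the unchecked original, `pitch * args->height`, silently truncates for a 65536×65536×32bpp request; the checked version rejects it instead.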


Re: [PATCH v3 23/23] drm/qxl: add overflow checks to qxl_mode_dumb_create()

2019-01-18 Thread Daniel Vetter
On Fri, Jan 18, 2019 at 5:32 PM Ville Syrjälä
 wrote:
>
> On Fri, Jan 18, 2019 at 04:49:44PM +0100, Daniel Vetter wrote:
> > On Fri, Jan 18, 2019 at 01:20:20PM +0100, Gerd Hoffmann wrote:
> > > Signed-off-by: Gerd Hoffmann 
> >
> > We already do all reasonable overflow checks in drm_mode_create_dumb(). If
> > you don't trust them, I think time would be better spent writing an igt to
> > test this than adding redundant checks in all drivers.
> >
> > You're also missing one check for bpp underflows :-)
>
> BTW I just noticed that we don't seem to be validating
> create_dumb->flags at all. Someone should probably add some
> checks for that, or mark it as deprecated in case we already
> lost the battle with userspace stack garbage.

Given that every kms client/compositor under the sun uses this (or
well, all the generic ones at least) I think we can safely assume we
have lost that battle :-/
-Daniel

>
> > -Daniel
> >
> > > ---
> > >  drivers/gpu/drm/qxl/qxl_dumb.c | 10 ++
> > >  1 file changed, 6 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/qxl/qxl_dumb.c b/drivers/gpu/drm/qxl/qxl_dumb.c
> > > index 272d19b677..bed6d06ee4 100644
> > > --- a/drivers/gpu/drm/qxl/qxl_dumb.c
> > > +++ b/drivers/gpu/drm/qxl/qxl_dumb.c
> > > @@ -37,11 +37,13 @@ int qxl_mode_dumb_create(struct drm_file *file_priv,
> > > uint32_t handle;
> > > int r;
> > > struct qxl_surface surf;
> > > -   uint32_t pitch, format;
> > > +   uint32_t pitch, size, format;
> > >
> > > -   pitch = args->width * ((args->bpp + 1) / 8);
> > > -   args->size = pitch * args->height;
> > > -   args->size = ALIGN(args->size, PAGE_SIZE);
> > > +   if (check_mul_overflow(args->width, ((args->bpp + 1) / 8), &pitch))
> > > +   return -EINVAL;
> > > +   if (check_mul_overflow(pitch, args->height, &size))
> > > +   return -EINVAL;
> > > +   args->size = ALIGN(size, PAGE_SIZE);
> > >
> > > switch (args->bpp) {
> > > case 16:
> > > --
> > > 2.9.3
> > >
> >
> > --
> > Daniel Vetter
> > Software Engineer, Intel Corporation
> > http://blog.ffwll.ch
> > ___
> > dri-devel mailing list
> > dri-de...@lists.freedesktop.org
> > https://lists.freedesktop.org/mailman/listinfo/dri-devel
>
> --
> Ville Syrjälä
> Intel
> ___
> dri-devel mailing list
> dri-de...@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel



-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

Re: [PATCH v7 0/7] Add virtio-iommu driver

2019-01-18 Thread Michael S. Tsirkin


On Tue, Jan 15, 2019 at 12:19:52PM +, Jean-Philippe Brucker wrote:
> Implement the virtio-iommu driver, following specification v0.9 [1].
> 
> This is a simple rebase onto Linux v5.0-rc2. We now use the
> dev_iommu_fwspec_get() helper introduced in v5.0 instead of accessing
> dev->iommu_fwspec, but there aren't any functional change from v6 [2].
> 
> Our current goal for virtio-iommu is to get a paravirtual IOMMU working
> on Arm, and enable device assignment to guest userspace. In this
> use-case the mappings are static, and don't require optimal performance,
> so this series tries to keep things simple. However there is plenty more
> to do for features and optimizations, and having this base in v5.1 would
> be good. Given that most of the changes are to drivers/iommu, I believe
> the driver and future changes should go via the IOMMU tree.
> 
> You can find Linux driver and kvmtool device on v0.9.2 branches [3],
> module and x86 support on virtio-iommu/devel. Also tested with Eric's
> QEMU device [4]. Please note that the series depends on Robin's
> probe-deferral fix [5], which will hopefully land in v5.0.
> 
> [1] Virtio-iommu specification v0.9, sources and pdf
> git://linux-arm.org/virtio-iommu.git virtio-iommu/v0.9
> http://jpbrucker.net/virtio-iommu/spec/v0.9/virtio-iommu-v0.9.pdf
> 
> [2] [PATCH v6 0/7] Add virtio-iommu driver
> 
> https://lists.linuxfoundation.org/pipermail/iommu/2018-December/032127.html
> 
> [3] git://linux-arm.org/linux-jpb.git virtio-iommu/v0.9.2
> git://linux-arm.org/kvmtool-jpb.git virtio-iommu/v0.9.2
> 
> [4] [RFC v9 00/17] VIRTIO-IOMMU device
> https://www.mail-archive.com/qemu-devel@nongnu.org/msg575578.html
> 
> [5] [PATCH] iommu/of: Fix probe-deferral
> https://www.spinics.net/lists/arm-kernel/msg698371.html

Thanks for the work!
So really my only issue with this is that there's no
way for the IOMMU to describe the devices that it
covers.

As a result that is then done in a platform-specific way.

And this means that, for example, it does not solve the problem that some
Power people have: their platform simply does not have a way to specify
which devices are covered by the IOMMU.

Solving that problem would make me much more excited about
this device.

On the other hand I can see that while there have been some
developments most of the code has been stable for quite a while now.

So what I am trying to do right now is to make a small module that loads
early and pokes at the IOMMU enough to extract, through standard virtio
config space, the data about which devices use the IOMMU.  IIUC this is
claimed to be impossible without messy changes to the boot sequence.

If I succeed at least on some platforms, I'll ask that this design be
worked into this device, minimizing the info that goes through DT/ACPI.  If
I see I can't make it in time for the next merge window, I plan to merge
the existing patches using DT (barring surprises).

As I only have a very small amount of time to spend on this attempt, if
someone else wants to try doing that in parallel, that would be great!


> Jean-Philippe Brucker (7):
>   dt-bindings: virtio-mmio: Add IOMMU description
>   dt-bindings: virtio: Add virtio-pci-iommu node
>   of: Allow the iommu-map property to omit untranslated devices
>   PCI: OF: Initialize dev->fwnode appropriately
>   iommu: Add virtio-iommu driver
>   iommu/virtio: Add probe request
>   iommu/virtio: Add event queue
> 
>  .../devicetree/bindings/virtio/iommu.txt  |   66 +
>  .../devicetree/bindings/virtio/mmio.txt   |   30 +
>  MAINTAINERS   |7 +
>  drivers/iommu/Kconfig |   11 +
>  drivers/iommu/Makefile|1 +
>  drivers/iommu/virtio-iommu.c  | 1158 +
>  drivers/of/base.c |   10 +-
>  drivers/pci/of.c  |7 +
>  include/uapi/linux/virtio_ids.h   |1 +
>  include/uapi/linux/virtio_iommu.h |  161 +++
>  10 files changed, 1449 insertions(+), 3 deletions(-)
>  create mode 100644 Documentation/devicetree/bindings/virtio/iommu.txt
>  create mode 100644 drivers/iommu/virtio-iommu.c
>  create mode 100644 include/uapi/linux/virtio_iommu.h
> 
> -- 
> 2.19.1
___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


Re: [PATCH v3 23/23] drm/qxl: add overflow checks to qxl_mode_dumb_create()

2019-01-18 Thread Daniel Vetter
On Fri, Jan 18, 2019 at 01:20:20PM +0100, Gerd Hoffmann wrote:
> Signed-off-by: Gerd Hoffmann 

We already do all reasonable overflow checks in drm_mode_create_dumb(). If
you don't trust them, I think time would be better spent writing an igt to
test this than adding redundant checks in all drivers.

You're also missing one check for bpp underflows :-)
-Daniel

> ---
>  drivers/gpu/drm/qxl/qxl_dumb.c | 10 ++
>  1 file changed, 6 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/gpu/drm/qxl/qxl_dumb.c b/drivers/gpu/drm/qxl/qxl_dumb.c
> index 272d19b677..bed6d06ee4 100644
> --- a/drivers/gpu/drm/qxl/qxl_dumb.c
> +++ b/drivers/gpu/drm/qxl/qxl_dumb.c
> @@ -37,11 +37,13 @@ int qxl_mode_dumb_create(struct drm_file *file_priv,
>   uint32_t handle;
>   int r;
>   struct qxl_surface surf;
> - uint32_t pitch, format;
> + uint32_t pitch, size, format;
>  
> - pitch = args->width * ((args->bpp + 1) / 8);
> - args->size = pitch * args->height;
> - args->size = ALIGN(args->size, PAGE_SIZE);
> + if (check_mul_overflow(args->width, ((args->bpp + 1) / 8), &pitch))
> + return -EINVAL;
> + if (check_mul_overflow(pitch, args->height, &size))
> + return -EINVAL;
> + args->size = ALIGN(size, PAGE_SIZE);
>  
>   switch (args->bpp) {
>   case 16:
> -- 
> 2.9.3
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


Re: [PATCH v2] virtio_net: bulk free tx skbs

2019-01-18 Thread Jason Wang


On 2019/1/18 下午12:20, Michael S. Tsirkin wrote:

Use napi_consume_skb() to get bulk free.  Note that napi_consume_skb is
safe to call in a non-napi context as long as the napi_budget flag is
correct.

Signed-off-by: Michael S. Tsirkin 
---

Changes from v1:
rebase on master.

lightly tested on developer's box.

  drivers/net/virtio_net.c | 12 ++--
  1 file changed, 6 insertions(+), 6 deletions(-)



Acked-by: Jason Wang 




diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 023725086046..8fadd8eaf601 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -1330,7 +1330,7 @@ static int virtnet_receive(struct receive_queue *rq, int budget,
 	return stats.packets;
 }
 
-static void free_old_xmit_skbs(struct send_queue *sq)
+static void free_old_xmit_skbs(struct send_queue *sq, bool in_napi)
 {
 	struct sk_buff *skb;
 	unsigned int len;
@@ -1343,7 +1343,7 @@ static void free_old_xmit_skbs(struct send_queue *sq)
 		bytes += skb->len;
 		packets++;
 
-		dev_consume_skb_any(skb);
+		napi_consume_skb(skb, in_napi);
 	}
 
 	/* Avoid overhead when no packets have been processed
@@ -1369,7 +1369,7 @@ static void virtnet_poll_cleantx(struct receive_queue *rq)
 		return;
 
 	if (__netif_tx_trylock(txq)) {
-		free_old_xmit_skbs(sq);
+		free_old_xmit_skbs(sq, true);
 		__netif_tx_unlock(txq);
 	}
 
@@ -1445,7 +1445,7 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
 	struct netdev_queue *txq = netdev_get_tx_queue(vi->dev, vq2txq(sq->vq));
 
 	__netif_tx_lock(txq, raw_smp_processor_id());
-	free_old_xmit_skbs(sq);
+	free_old_xmit_skbs(sq, true);
 	__netif_tx_unlock(txq);
 
 	virtqueue_napi_complete(napi, sq->vq, 0);
@@ -1514,7 +1514,7 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
 	bool use_napi = sq->napi.weight;
 
 	/* Free up any pending old buffers before queueing new ones. */
-	free_old_xmit_skbs(sq);
+	free_old_xmit_skbs(sq, false);
 
 	if (use_napi && kick)
 		virtqueue_enable_cb_delayed(sq->vq);
@@ -1557,7 +1557,7 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
 	if (!use_napi &&
 	    unlikely(!virtqueue_enable_cb_delayed(sq->vq))) {
 		/* More just got used, free them then recheck. */
-		free_old_xmit_skbs(sq);
+		free_old_xmit_skbs(sq, false);
 		if (sq->vq->num_free >= 2+MAX_SKB_FRAGS) {
 			netif_start_subqueue(dev, qnum);
 			virtqueue_disable_cb(sq->vq);

___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
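The in_napi flag threaded through the patch above follows a common pattern: one free routine shared between NAPI poll context (where per-cpu bulk freeing is safe) and arbitrary contexts (where it is not), with the caller saying which case applies. A hypothetical userspace sketch of that shape — the struct, counters, and function names here are invented for illustration; only the budget semantics of napi_consume_skb() (nonzero budget allows batching, zero falls back to per-skb freeing) are taken from the kernel:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

struct buf { struct buf *next; };

/* Counters standing in for the two kernel free paths. */
static int batched_frees;    /* napi_consume_skb() with nonzero budget */
static int immediate_frees;  /* dev_consume_skb_any() fallback */

/* Stand-in for napi_consume_skb(skb, budget): batching is only legal
 * when the caller runs in NAPI context. */
static void consume_buf(struct buf *b, bool in_napi)
{
    if (in_napi)
        batched_frees++;   /* would be deferred to a per-cpu free list */
    else
        immediate_frees++; /* freed one at a time, safe in any context */
    free(b);
}

/* Analogue of free_old_xmit_skbs(sq, in_napi): reap all completed
 * buffers, telling the helper which context we are in. */
static void free_old_bufs(struct buf **list, bool in_napi)
{
    while (*list) {
        struct buf *next = (*list)->next;
        consume_buf(*list, in_napi);
        *list = next;
    }
}

/* Build a list of n dummy buffers for the examples below. */
static struct buf *make_list(int n)
{
    struct buf *head = NULL;
    while (n-- > 0) {
        struct buf *b = malloc(sizeof(*b));
        b->next = head;
        head = b;
    }
    return head;
}
```

The design choice mirrored here is that the context flag is passed down by the callers that already know it (poll paths pass true, start_xmit passes false), rather than having the free routine try to detect its context.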

[PATCH v3 18/23] drm/qxl: remove dead qxl fbdev emulation code

2019-01-18 Thread Gerd Hoffmann
Lovely diffstat, thanks to the new generic fbdev emulation.

 drm/qxl/Makefile   |2
 drm/qxl/qxl_draw.c |  232 
 drm/qxl/qxl_drv.h  |   21 ---
 drm/qxl/qxl_fb.c   |  300 -

Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/qxl/qxl_drv.h  |  21 ---
 drivers/gpu/drm/qxl/qxl_draw.c | 232 ---
 drivers/gpu/drm/qxl/qxl_fb.c   | 300 -
 drivers/gpu/drm/qxl/Makefile   |   2 +-
 4 files changed, 1 insertion(+), 554 deletions(-)
 delete mode 100644 drivers/gpu/drm/qxl/qxl_fb.c

diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
index 8c3af1cdbe..4a0331b3ff 100644
--- a/drivers/gpu/drm/qxl/qxl_drv.h
+++ b/drivers/gpu/drm/qxl/qxl_drv.h
@@ -220,8 +220,6 @@ struct qxl_device {
struct qxl_mman mman;
struct qxl_gem  gem;
 
-   struct drm_fb_helperfb_helper;
-
void *ram_physical;
 
struct qxl_ring *release_ring;
@@ -322,12 +320,6 @@ qxl_bo_physical_address(struct qxl_device *qdev, struct qxl_bo *bo,
return slot->high_bits | (bo->tbo.offset - slot->gpu_offset + offset);
 }
 
-/* qxl_fb.c */
-#define QXLFB_CONN_LIMIT 1
-
-int qxl_fbdev_init(struct qxl_device *qdev);
-void qxl_fbdev_fini(struct qxl_device *qdev);
-
 /* qxl_display.c */
 void qxl_display_read_client_monitors_config(struct qxl_device *qdev);
 int qxl_create_monitors_object(struct qxl_device *qdev);
@@ -432,9 +424,6 @@ int qxl_alloc_bo_reserved(struct qxl_device *qdev,
  struct qxl_bo **_bo);
 /* qxl drawing commands */
 
-void qxl_draw_opaque_fb(const struct qxl_fb_image *qxl_fb_image,
-   int stride /* filled in if 0 */);
-
 void qxl_draw_dirty_fb(struct qxl_device *qdev,
   struct drm_framebuffer *fb,
   struct qxl_bo *bo,
@@ -443,13 +432,6 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
   unsigned int num_clips, int inc,
   uint32_t dumb_shadow_offset);
 
-void qxl_draw_fill(struct qxl_draw_fill *qxl_draw_fill_rec);
-
-void qxl_draw_copyarea(struct qxl_device *qdev,
-  u32 width, u32 height,
-  u32 sx, u32 sy,
-  u32 dx, u32 dy);
-
 void qxl_release_free(struct qxl_device *qdev,
  struct qxl_release *release);
 
@@ -481,9 +463,6 @@ int qxl_gem_prime_mmap(struct drm_gem_object *obj,
 int qxl_irq_init(struct qxl_device *qdev);
 irqreturn_t qxl_irq_handler(int irq, void *arg);
 
-/* qxl_fb.c */
-bool qxl_fbdev_qobj_is_fb(struct qxl_device *qdev, struct qxl_bo *qobj);
-
 int qxl_debugfs_add_files(struct qxl_device *qdev,
  struct drm_info_list *files,
  unsigned int nfiles);
diff --git a/drivers/gpu/drm/qxl/qxl_draw.c b/drivers/gpu/drm/qxl/qxl_draw.c
index 5313ad21c1..97c3f1a95a 100644
--- a/drivers/gpu/drm/qxl/qxl_draw.c
+++ b/drivers/gpu/drm/qxl/qxl_draw.c
@@ -109,152 +109,6 @@ make_drawable(struct qxl_device *qdev, int surface, uint8_t type,
return 0;
 }
 
-static int alloc_palette_object(struct qxl_device *qdev,
-   struct qxl_release *release,
-   struct qxl_bo **palette_bo)
-{
-   return qxl_alloc_bo_reserved(qdev, release,
-				sizeof(struct qxl_palette) + sizeof(uint32_t) * 2,
-palette_bo);
-}
-
-static int qxl_palette_create_1bit(struct qxl_bo *palette_bo,
-  struct qxl_release *release,
-  const struct qxl_fb_image *qxl_fb_image)
-{
-   const struct fb_image *fb_image = &qxl_fb_image->fb_image;
-   uint32_t visual = qxl_fb_image->visual;
-   const uint32_t *pseudo_palette = qxl_fb_image->pseudo_palette;
-   struct qxl_palette *pal;
-   int ret;
-   uint32_t fgcolor, bgcolor;
-   static uint64_t unique; /* we make no attempt to actually set this
-* correctly globaly, since that would require
-* tracking all of our palettes. */
-   ret = qxl_bo_kmap(palette_bo, (void **)&pal);
-   if (ret)
-   return ret;
-   pal->num_ents = 2;
-   pal->unique = unique++;
-   if (visual == FB_VISUAL_TRUECOLOR || visual == FB_VISUAL_DIRECTCOLOR) {
-   /* NB: this is the only used branch currently. */
-   fgcolor = pseudo_palette[fb_image->fg_color];
-   bgcolor = pseudo_palette[fb_image->bg_color];
-   } else {
-   fgcolor = fb_image->fg_color;
-   bgcolor = fb_image->bg_color;
-   }
-   pal->ents[0] = bgcolor;
-   pal->ents[1] = fgcolor;
-   qxl_bo_kunmap(palette_bo);
-   return 0;
-}
-
-void qxl_draw_opaque_fb(const struct qxl_fb_image *qxl_fb_image,
- 

[PATCH v3 12/23] drm/qxl: track primary bo

2019-01-18 Thread Gerd Hoffmann
Track which bo is used as the primary surface.  With that in place we
don't need the primary_created flag any more; we can just check the
primary bo pointer instead.

Also verify we don't already have a primary surface in
qxl_io_create_primary().

Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/qxl/qxl_drv.h | 2 +-
 drivers/gpu/drm/qxl/qxl_cmd.c | 7 +--
 drivers/gpu/drm/qxl/qxl_display.c | 2 +-
 3 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
index cb767aaef6..150b1a4f66 100644
--- a/drivers/gpu/drm/qxl/qxl_drv.h
+++ b/drivers/gpu/drm/qxl/qxl_drv.h
@@ -229,7 +229,7 @@ struct qxl_device {
 
struct qxl_ram_header *ram_header;
 
-   unsigned int primary_created:1;
+   struct qxl_bo *primary_bo;
 
struct qxl_memslot main_slot;
struct qxl_memslot surfaces_slot;
diff --git a/drivers/gpu/drm/qxl/qxl_cmd.c b/drivers/gpu/drm/qxl/qxl_cmd.c
index bc13539249..8e64127259 100644
--- a/drivers/gpu/drm/qxl/qxl_cmd.c
+++ b/drivers/gpu/drm/qxl/qxl_cmd.c
@@ -374,13 +374,16 @@ void qxl_io_flush_surfaces(struct qxl_device *qdev)
 void qxl_io_destroy_primary(struct qxl_device *qdev)
 {
wait_for_io_cmd(qdev, 0, QXL_IO_DESTROY_PRIMARY_ASYNC);
-   qdev->primary_created = false;
+   qdev->primary_bo = NULL;
 }
 
 void qxl_io_create_primary(struct qxl_device *qdev, struct qxl_bo *bo)
 {
struct qxl_surface_create *create;
 
+   if (WARN_ON(qdev->primary_bo))
+   return;
+
DRM_DEBUG_DRIVER("qdev %p, ram_header %p\n", qdev, qdev->ram_header);
create = &qdev->ram_header->create_surface;
create->format = bo->surf.format;
@@ -399,7 +402,7 @@ void qxl_io_create_primary(struct qxl_device *qdev, struct qxl_bo *bo)
create->type = QXL_SURF_TYPE_PRIMARY;
 
wait_for_io_cmd(qdev, 0, QXL_IO_CREATE_PRIMARY_ASYNC);
-   qdev->primary_created = true;
+   qdev->primary_bo = bo;
 }
 
 void qxl_io_memslot_add(struct qxl_device *qdev, uint8_t id)
diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
index 21165ab514..d3215eac9b 100644
--- a/drivers/gpu/drm/qxl/qxl_display.c
+++ b/drivers/gpu/drm/qxl/qxl_display.c
@@ -302,7 +302,7 @@ static void qxl_crtc_update_monitors_config(struct drm_crtc *crtc,
struct qxl_head head;
int oldcount, i = qcrtc->index;
 
-   if (!qdev->primary_created) {
+   if (!qdev->primary_bo) {
DRM_DEBUG_KMS("no primary surface, skip (%s)\n", reason);
return;
}
-- 
2.9.3

___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


[PATCH v3 23/23] drm/qxl: add overflow checks to qxl_mode_dumb_create()

2019-01-18 Thread Gerd Hoffmann
Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/qxl/qxl_dumb.c | 10 ++
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/qxl/qxl_dumb.c b/drivers/gpu/drm/qxl/qxl_dumb.c
index 272d19b677..bed6d06ee4 100644
--- a/drivers/gpu/drm/qxl/qxl_dumb.c
+++ b/drivers/gpu/drm/qxl/qxl_dumb.c
@@ -37,11 +37,13 @@ int qxl_mode_dumb_create(struct drm_file *file_priv,
uint32_t handle;
int r;
struct qxl_surface surf;
-   uint32_t pitch, format;
+   uint32_t pitch, size, format;
 
-   pitch = args->width * ((args->bpp + 1) / 8);
-   args->size = pitch * args->height;
-   args->size = ALIGN(args->size, PAGE_SIZE);
+   if (check_mul_overflow(args->width, ((args->bpp + 1) / 8), &pitch))
+   return -EINVAL;
+   if (check_mul_overflow(pitch, args->height, &size))
+   return -EINVAL;
+   args->size = ALIGN(size, PAGE_SIZE);
 
switch (args->bpp) {
case 16:
-- 
2.9.3

___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


[PATCH v3 17/23] drm/qxl: use generic fbdev emulation

2019-01-18 Thread Gerd Hoffmann
Switch qxl over to the new generic fbdev emulation.

Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/qxl/qxl_display.c | 7 ---
 drivers/gpu/drm/qxl/qxl_drv.c | 2 ++
 2 files changed, 2 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
index ef832f98ab..9c751f01e3 100644
--- a/drivers/gpu/drm/qxl/qxl_display.c
+++ b/drivers/gpu/drm/qxl/qxl_display.c
@@ -1221,18 +1221,11 @@ int qxl_modeset_init(struct qxl_device *qdev)
qxl_display_read_client_monitors_config(qdev);
 
drm_mode_config_reset(&qdev->ddev);
-
-   /* primary surface must be created by this point, to allow
-* issuing command queue commands and having them read by
-* spice server. */
-   qxl_fbdev_init(qdev);
return 0;
 }
 
 void qxl_modeset_fini(struct qxl_device *qdev)
 {
-   qxl_fbdev_fini(qdev);
-
qxl_destroy_monitors_object(qdev);
drm_mode_config_cleanup(&qdev->ddev);
 }
diff --git a/drivers/gpu/drm/qxl/qxl_drv.c b/drivers/gpu/drm/qxl/qxl_drv.c
index 13c8a662f9..3fce7d16df 100644
--- a/drivers/gpu/drm/qxl/qxl_drv.c
+++ b/drivers/gpu/drm/qxl/qxl_drv.c
@@ -93,6 +93,8 @@ qxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
if (ret)
goto modeset_cleanup;
 
+   drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, 0, "qxl");
+   drm_fbdev_generic_setup(&qdev->ddev, 32);
return 0;
 
 modeset_cleanup:
-- 
2.9.3

___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


[PATCH v3 02/23] drm/qxl: drop unused qxl_fb_virtual_address

2019-01-18 Thread Gerd Hoffmann
Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/qxl/qxl_drv.h | 7 ---
 1 file changed, 7 deletions(-)

diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
index 38c5a8b1df..7eabf4a9ed 100644
--- a/drivers/gpu/drm/qxl/qxl_drv.h
+++ b/drivers/gpu/drm/qxl/qxl_drv.h
@@ -308,13 +308,6 @@ void qxl_ring_free(struct qxl_ring *ring);
 void qxl_ring_init_hdr(struct qxl_ring *ring);
 int qxl_check_idle(struct qxl_ring *ring);
 
-static inline void *
-qxl_fb_virtual_address(struct qxl_device *qdev, unsigned long physical)
-{
-   DRM_DEBUG_DRIVER("not implemented (%lu)\n", physical);
-   return 0;
-}
-
 static inline uint64_t
 qxl_bo_physical_address(struct qxl_device *qdev, struct qxl_bo *bo,
unsigned long offset)
-- 
2.9.3

___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


[PATCH v3 07/23] drm/qxl: allow both PRIV and VRAM placement for QXL_GEM_DOMAIN_SURFACE

2019-01-18 Thread Gerd Hoffmann
qxl surfaces (used for framebuffers and gem objects) can live in both
the VRAM and PRIV ttm domains.  Update the placement setup to include
both.  Put PRIV first in the list so it is preferred; VRAM will then
have more room for objects which must be allocated there.

Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/qxl/qxl_object.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c
index 34eff8b21e..024c8dd317 100644
--- a/drivers/gpu/drm/qxl/qxl_object.c
+++ b/drivers/gpu/drm/qxl/qxl_object.c
@@ -60,8 +60,10 @@ void qxl_ttm_placement_from_domain(struct qxl_bo *qbo, u32 domain, bool pinned)
 	qbo->placement.busy_placement = qbo->placements;
 	if (domain == QXL_GEM_DOMAIN_VRAM)
 		qbo->placements[c++].flags = TTM_PL_FLAG_CACHED | TTM_PL_FLAG_VRAM | pflag;
-	if (domain == QXL_GEM_DOMAIN_SURFACE)
+	if (domain == QXL_GEM_DOMAIN_SURFACE) {
 		qbo->placements[c++].flags = TTM_PL_FLAG_CACHED | TTM_PL_FLAG_PRIV | pflag;
+		qbo->placements[c++].flags = TTM_PL_FLAG_CACHED | TTM_PL_FLAG_VRAM | pflag;
+	}
 	if (domain == QXL_GEM_DOMAIN_CPU)
 		qbo->placements[c++].flags = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM | pflag;
 	if (!c)
-- 
2.9.3

___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


[PATCH v3 01/23] drm/qxl: drop ttm_mem_reg arg from qxl_hw_surface_alloc()

2019-01-18 Thread Gerd Hoffmann
Not used, is always NULL.

Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/qxl/qxl_drv.h|  3 +--
 drivers/gpu/drm/qxl/qxl_cmd.c| 14 ++
 drivers/gpu/drm/qxl/qxl_object.c |  2 +-
 3 files changed, 4 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
index 13a0254b59..38c5a8b1df 100644
--- a/drivers/gpu/drm/qxl/qxl_drv.h
+++ b/drivers/gpu/drm/qxl/qxl_drv.h
@@ -497,8 +497,7 @@ int qxl_surface_id_alloc(struct qxl_device *qdev,
 void qxl_surface_id_dealloc(struct qxl_device *qdev,
uint32_t surface_id);
 int qxl_hw_surface_alloc(struct qxl_device *qdev,
-struct qxl_bo *surf,
-struct ttm_mem_reg *mem);
+struct qxl_bo *surf);
 int qxl_hw_surface_dealloc(struct qxl_device *qdev,
   struct qxl_bo *surf);
 
diff --git a/drivers/gpu/drm/qxl/qxl_cmd.c b/drivers/gpu/drm/qxl/qxl_cmd.c
index 2e100f6442..5ba831c78c 100644
--- a/drivers/gpu/drm/qxl/qxl_cmd.c
+++ b/drivers/gpu/drm/qxl/qxl_cmd.c
@@ -460,8 +460,7 @@ void qxl_surface_id_dealloc(struct qxl_device *qdev,
 }
 
 int qxl_hw_surface_alloc(struct qxl_device *qdev,
-struct qxl_bo *surf,
-struct ttm_mem_reg *new_mem)
+struct qxl_bo *surf)
 {
struct qxl_surface_cmd *cmd;
struct qxl_release *release;
@@ -487,16 +486,7 @@ int qxl_hw_surface_alloc(struct qxl_device *qdev,
 	cmd->u.surface_create.width = surf->surf.width;
 	cmd->u.surface_create.height = surf->surf.height;
 	cmd->u.surface_create.stride = surf->surf.stride;
-	if (new_mem) {
-		int slot_id = surf->type == QXL_GEM_DOMAIN_VRAM ? qdev->main_mem_slot : qdev->surfaces_mem_slot;
-		struct qxl_memslot *slot = &(qdev->mem_slots[slot_id]);
-
-		/* TODO - need to hold one of the locks to read tbo.offset */
-		cmd->u.surface_create.data = slot->high_bits;
-
-		cmd->u.surface_create.data |= (new_mem->start << PAGE_SHIFT) + surf->tbo.bdev->man[new_mem->mem_type].gpu_offset;
-	} else
-		cmd->u.surface_create.data = qxl_bo_physical_address(qdev, surf, 0);
+	cmd->u.surface_create.data = qxl_bo_physical_address(qdev, surf, 0);
 	cmd->surface_id = surf->surface_id;
 	qxl_release_unmap(qdev, release, &cmd->release_info);
 
diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c
index 91f3bbc73e..34eff8b21e 100644
--- a/drivers/gpu/drm/qxl/qxl_object.c
+++ b/drivers/gpu/drm/qxl/qxl_object.c
@@ -332,7 +332,7 @@ int qxl_bo_check_id(struct qxl_device *qdev, struct qxl_bo 
*bo)
if (ret)
return ret;
 
-   ret = qxl_hw_surface_alloc(qdev, bo, NULL);
+   ret = qxl_hw_surface_alloc(qdev, bo);
if (ret)
return ret;
}
-- 
2.9.3

___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


[PATCH v3 08/23] drm/qxl: use QXL_GEM_DOMAIN_SURFACE for shadow bo.

2019-01-18 Thread Gerd Hoffmann
The shadow bo is used as a qxl surface, so allocate it as
QXL_GEM_DOMAIN_SURFACE.  It should then usually land in the PRIV ttm
domain, which reduces VRAM memory pressure.

Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/qxl/qxl_display.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/qxl/qxl_display.c 
b/drivers/gpu/drm/qxl/qxl_display.c
index 1f8fddcc34..86bfc19bea 100644
--- a/drivers/gpu/drm/qxl/qxl_display.c
+++ b/drivers/gpu/drm/qxl/qxl_display.c
@@ -758,7 +758,7 @@ static int qxl_plane_prepare_fb(struct drm_plane *plane,
user_bo->shadow = old_bo->shadow;
} else {
qxl_bo_create(qdev, user_bo->gem_base.size,
- true, true, QXL_GEM_DOMAIN_VRAM, NULL,
+ true, true, QXL_GEM_DOMAIN_SURFACE, NULL,
  &user_bo->shadow);
}
}
-- 
2.9.3



[PATCH v3 21/23] drm/qxl: add qxl_add_mode helper function

2019-01-18 Thread Gerd Hoffmann
Add a helper function to add custom video modes to a connector.

Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/qxl/qxl_display.c | 84 +++
 1 file changed, 49 insertions(+), 35 deletions(-)

diff --git a/drivers/gpu/drm/qxl/qxl_display.c 
b/drivers/gpu/drm/qxl/qxl_display.c
index fed2ea018d..926fcb49b2 100644
--- a/drivers/gpu/drm/qxl/qxl_display.c
+++ b/drivers/gpu/drm/qxl/qxl_display.c
@@ -212,15 +212,36 @@ static int qxl_check_framebuffer(struct qxl_device *qdev,
return qxl_check_mode(qdev, bo->surf.width, bo->surf.height);
 }
 
-static int qxl_add_monitors_config_modes(struct drm_connector *connector,
- unsigned *pwidth,
- unsigned *pheight)
+static int qxl_add_mode(struct drm_connector *connector,
+   unsigned int width,
+   unsigned int height,
+   bool preferred)
+{
+   struct drm_device *dev = connector->dev;
+   struct qxl_device *qdev = dev->dev_private;
+   struct drm_display_mode *mode = NULL;
+   int rc;
+
+   rc = qxl_check_mode(qdev, width, height);
+   if (rc != 0)
+   return 0;
+
+   mode = drm_cvt_mode(dev, width, height, 60, false, false, false);
+   if (preferred)
+   mode->type |= DRM_MODE_TYPE_PREFERRED;
+   mode->hdisplay = width;
+   mode->vdisplay = height;
+   drm_mode_set_name(mode);
+   drm_mode_probed_add(connector, mode);
+   return 1;
+}
+
+static int qxl_add_monitors_config_modes(struct drm_connector *connector)
 {
struct drm_device *dev = connector->dev;
struct qxl_device *qdev = dev->dev_private;
struct qxl_output *output = drm_connector_to_qxl_output(connector);
int h = output->index;
-   struct drm_display_mode *mode = NULL;
struct qxl_head *head;
 
if (!qdev->monitors_config)
@@ -235,19 +256,7 @@ static int qxl_add_monitors_config_modes(struct 
drm_connector *connector,
head = &qdev->client_monitors_config->heads[h];
DRM_DEBUG_KMS("head %d is %dx%d\n", h, head->width, head->height);
 
-   mode = drm_cvt_mode(dev, head->width, head->height, 60, false, false,
-   false);
-   mode->type |= DRM_MODE_TYPE_PREFERRED;
-   mode->hdisplay = head->width;
-   mode->vdisplay = head->height;
-   drm_mode_set_name(mode);
-   *pwidth = head->width;
-   *pheight = head->height;
-   drm_mode_probed_add(connector, mode);
-   /* remember the last custom size for mode validation */
-   qdev->monitors_config_width = mode->hdisplay;
-   qdev->monitors_config_height = mode->vdisplay;
-   return 1;
+   return qxl_add_mode(connector, head->width, head->height, true);
 }
 
 static struct mode_size {
@@ -273,22 +282,16 @@ static struct mode_size {
{1920, 1200}
 };
 
-static int qxl_add_common_modes(struct drm_connector *connector,
-unsigned int pwidth,
-unsigned int pheight)
+static int qxl_add_common_modes(struct drm_connector *connector)
 {
-   struct drm_device *dev = connector->dev;
-   struct drm_display_mode *mode = NULL;
-   int i;
+   int i, ret = 0;
 
-   for (i = 0; i < ARRAY_SIZE(common_modes); i++) {
-   mode = drm_cvt_mode(dev, common_modes[i].w, common_modes[i].h,
-   60, false, false, false);
-   if (common_modes[i].w == pwidth && common_modes[i].h == pheight)
-   mode->type |= DRM_MODE_TYPE_PREFERRED;
-   drm_mode_probed_add(connector, mode);
-   }
-   return i - 1;
+   for (i = 0; i < ARRAY_SIZE(common_modes); i++)
+   ret += qxl_add_mode(connector,
+   common_modes[i].w,
+   common_modes[i].h,
+   false);
+   return ret;
 }
 
 static void qxl_send_monitors_config(struct qxl_device *qdev)
@@ -991,14 +994,25 @@ static int qdev_crtc_init(struct drm_device *dev, int 
crtc_id)
 
 static int qxl_conn_get_modes(struct drm_connector *connector)
 {
+   struct drm_device *dev = connector->dev;
+   struct qxl_device *qdev = dev->dev_private;
+   struct qxl_output *output = drm_connector_to_qxl_output(connector);
unsigned int pwidth = 1024;
unsigned int pheight = 768;
int ret = 0;
 
-   ret = qxl_add_monitors_config_modes(connector, &pwidth, &pheight);
-   if (ret < 0)
-   return ret;
-   ret += qxl_add_common_modes(connector, pwidth, pheight);
+   if (qdev->client_monitors_config) {
+   struct qxl_head *head;
+   head = &qdev->client_monitors_config->heads[output->index];
+   if (head->width)
+   pwidth = head->width;
+   if (head->height)
+ 

[PATCH v3 16/23] drm/qxl: implement prime kmap/kunmap

2019-01-18 Thread Gerd Hoffmann
Generic fbdev emulation needs this.  Also, we must now keep track of
the number of mappings, so we don't unmap early when two users want a
kmap of the same bo.  Add a sanity check to the destroy callback to
make sure kmap/kunmap is balanced.
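The refcounted mapping scheme described above can be sketched in plain C. The `demo_` names below are hypothetical stand-ins for the driver's kmap/kunmap pair and the underlying ttm calls, not the actual qxl API:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Sketch of the refcounted kmap logic: the first map creates the
 * mapping, later maps only bump the counter, and the mapping is torn
 * down when the last user unmaps.  demo_* names are hypothetical.
 */
struct demo_bo {
	void *kptr;             /* kernel virtual address, NULL if unmapped */
	unsigned int map_count; /* number of outstanding kmaps */
};

static int demo_kmap(struct demo_bo *bo, void **ptr)
{
	static char backing[16];        /* stand-in for ttm_bo_kmap() */

	if (bo->kptr) {                 /* already mapped: share it */
		bo->map_count++;
		*ptr = bo->kptr;
		return 0;
	}
	bo->kptr = backing;
	bo->map_count = 1;
	*ptr = bo->kptr;
	return 0;
}

static void demo_kunmap(struct demo_bo *bo)
{
	if (bo->kptr == NULL)
		return;
	if (--bo->map_count > 0)        /* other users still mapped */
		return;
	bo->kptr = NULL;                /* stand-in for ttm_bo_kunmap() */
}
```

Two users can map the same bo and only the second unmap actually drops the mapping, which is the balance the patch's WARN_ON_ONCE in the destroy callback checks for.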

Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/qxl/qxl_drv.h|  1 +
 drivers/gpu/drm/qxl/qxl_object.c |  6 ++
 drivers/gpu/drm/qxl/qxl_prime.c  | 17 +
 3 files changed, 20 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
index 43c6df9cf9..8c3af1cdbe 100644
--- a/drivers/gpu/drm/qxl/qxl_drv.h
+++ b/drivers/gpu/drm/qxl/qxl_drv.h
@@ -84,6 +84,7 @@ struct qxl_bo {
struct ttm_bo_kmap_obj  kmap;
unsigned int pin_count;
void*kptr;
+   unsigned intmap_count;
int type;
 
/* Constant after initialization */
diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c
index 024c8dd317..4928fa6029 100644
--- a/drivers/gpu/drm/qxl/qxl_object.c
+++ b/drivers/gpu/drm/qxl/qxl_object.c
@@ -36,6 +36,7 @@ static void qxl_ttm_bo_destroy(struct ttm_buffer_object *tbo)
qdev = (struct qxl_device *)bo->gem_base.dev->dev_private;
 
qxl_surface_evict(qdev, bo, false);
+   WARN_ON_ONCE(bo->map_count > 0);
mutex_lock(&qdev->gem.mutex);
list_del_init(&bo->list);
mutex_unlock(&qdev->gem.mutex);
@@ -131,6 +132,7 @@ int qxl_bo_kmap(struct qxl_bo *bo, void **ptr)
if (bo->kptr) {
if (ptr)
*ptr = bo->kptr;
+   bo->map_count++;
return 0;
}
r = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages, &bo->kmap);
@@ -139,6 +141,7 @@ int qxl_bo_kmap(struct qxl_bo *bo, void **ptr)
bo->kptr = ttm_kmap_obj_virtual(&bo->kmap, &is_iomem);
if (ptr)
*ptr = bo->kptr;
+   bo->map_count = 1;
return 0;
 }
 
@@ -180,6 +183,9 @@ void qxl_bo_kunmap(struct qxl_bo *bo)
 {
if (bo->kptr == NULL)
return;
+   bo->map_count--;
+   if (bo->map_count > 0)
+   return;
bo->kptr = NULL;
ttm_bo_kunmap(&bo->kmap);
 }
diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c
index a55dece118..708378844c 100644
--- a/drivers/gpu/drm/qxl/qxl_prime.c
+++ b/drivers/gpu/drm/qxl/qxl_prime.c
@@ -22,7 +22,7 @@
  * Authors: Andreas Pokorny
  */
 
-#include "qxl_drv.h"
+#include "qxl_object.h"
 
 /* Empty Implementations as there should not be any other driver for a virtual
  * device that might share buffers with qxl */
@@ -54,13 +54,22 @@ struct drm_gem_object *qxl_gem_prime_import_sg_table(
 
 void *qxl_gem_prime_vmap(struct drm_gem_object *obj)
 {
-   WARN_ONCE(1, "not implemented");
-   return ERR_PTR(-ENOSYS);
+   struct qxl_bo *bo = gem_to_qxl_bo(obj);
+   void *ptr;
+   int ret;
+
+   ret = qxl_bo_kmap(bo, &ptr);
+   if (ret < 0)
+   return ERR_PTR(ret);
+
+   return ptr;
 }
 
 void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
 {
-   WARN_ONCE(1, "not implemented");
+   struct qxl_bo *bo = gem_to_qxl_bo(obj);
+
+   qxl_bo_kunmap(bo);
 }
 
 int qxl_gem_prime_mmap(struct drm_gem_object *obj,
-- 
2.9.3



[PATCH v3 20/23] drm/qxl: add mode/framebuffer check functions

2019-01-18 Thread Gerd Hoffmann
Add a helper function to check video modes.  Also add a helper to check
framebuffer buffer objects, using the former for consistency.  That way
we should not fail in qxl_primary_atomic_check(), because video modes
which are too big will not be added to the mode list in the first place.
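The overflow-checked size computation the patch introduces can be sketched with the compiler builtin that the kernel's check_mul_overflow() wraps. `demo_check_mode` is a hypothetical stand-in; the vram size is passed in rather than read from the device:

```c
#include <errno.h>

/*
 * Sketch of the mode check: the 32bpp stride and framebuffer size are
 * computed with explicit overflow checks, then compared against the
 * available vram.  __builtin_mul_overflow is the GCC/Clang builtin
 * behind the kernel's check_mul_overflow().
 */
static int demo_check_mode(unsigned int width, unsigned int height,
			   unsigned long long vram_size)
{
	unsigned int stride;
	unsigned int size;

	if (__builtin_mul_overflow(width, 4u, &stride))
		return -EINVAL;
	if (__builtin_mul_overflow(stride, height, &size))
		return -EINVAL;
	if (size > vram_size)
		return -ENOMEM;
	return 0;
}
```

The two failure modes differ on purpose: an arithmetic overflow means the mode is invalid (-EINVAL), while a mode that computes cleanly but does not fit reports -ENOMEM.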

Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/qxl/qxl_display.c | 44 +++
 1 file changed, 26 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/qxl/qxl_display.c 
b/drivers/gpu/drm/qxl/qxl_display.c
index 9c751f01e3..fed2ea018d 100644
--- a/drivers/gpu/drm/qxl/qxl_display.c
+++ b/drivers/gpu/drm/qxl/qxl_display.c
@@ -190,6 +190,28 @@ void qxl_display_read_client_monitors_config(struct 
qxl_device *qdev)
}
 }
 
+static int qxl_check_mode(struct qxl_device *qdev,
+ unsigned int width,
+ unsigned int height)
+{
+   unsigned int stride;
+   unsigned int size;
+
+   if (check_mul_overflow(width, 4u, &stride))
+   return -EINVAL;
+   if (check_mul_overflow(stride, height, &size))
+   return -EINVAL;
+   if (size > qdev->vram_size)
+   return -ENOMEM;
+   return 0;
+}
+
+static int qxl_check_framebuffer(struct qxl_device *qdev,
+struct qxl_bo *bo)
+{
+   return qxl_check_mode(qdev, bo->surf.width, bo->surf.height);
+}
+
 static int qxl_add_monitors_config_modes(struct drm_connector *connector,
  unsigned *pwidth,
  unsigned *pheight)
@@ -469,12 +491,7 @@ static int qxl_primary_atomic_check(struct drm_plane 
*plane,
 
bo = gem_to_qxl_bo(state->fb->obj[0]);
 
-   if (bo->surf.stride * bo->surf.height > qdev->vram_size) {
-   DRM_ERROR("Mode doesn't fit in vram size (vgamem)");
-   return -EINVAL;
-   }
-
-   return 0;
+   return qxl_check_framebuffer(qdev, bo);
 }
 
 static int qxl_primary_apply_cursor(struct drm_plane *plane)
@@ -990,20 +1007,11 @@ static enum drm_mode_status qxl_conn_mode_valid(struct 
drm_connector *connector,
 {
struct drm_device *ddev = connector->dev;
struct qxl_device *qdev = ddev->dev_private;
-   int i;
 
-   /* TODO: is this called for user defined modes? (xrandr --add-mode)
-* TODO: check that the mode fits in the framebuffer */
+   if (qxl_check_mode(qdev, mode->hdisplay, mode->vdisplay) != 0)
+   return MODE_BAD;
 
-   if (qdev->monitors_config_width == mode->hdisplay &&
-   qdev->monitors_config_height == mode->vdisplay)
-   return MODE_OK;
-
-   for (i = 0; i < ARRAY_SIZE(common_modes); i++) {
-   if (common_modes[i].w == mode->hdisplay && common_modes[i].h == 
mode->vdisplay)
-   return MODE_OK;
-   }
-   return MODE_BAD;
+   return MODE_OK;
 }
 
 static struct drm_encoder *qxl_best_encoder(struct drm_connector *connector)
-- 
2.9.3



[PATCH v3 22/23] drm/qxl: use kernel mode db

2019-01-18 Thread Gerd Hoffmann
Add all standard modes from the kernel's video mode data base.
Keep a few non-standard modes in the qxl mode list.

Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/qxl/qxl_display.c | 27 +++
 1 file changed, 7 insertions(+), 20 deletions(-)

diff --git a/drivers/gpu/drm/qxl/qxl_display.c 
b/drivers/gpu/drm/qxl/qxl_display.c
index 926fcb49b2..df768b0c83 100644
--- a/drivers/gpu/drm/qxl/qxl_display.c
+++ b/drivers/gpu/drm/qxl/qxl_display.c
@@ -262,34 +262,20 @@ static int qxl_add_monitors_config_modes(struct 
drm_connector *connector)
 static struct mode_size {
int w;
int h;
-} common_modes[] = {
-   { 640,  480},
+} extra_modes[] = {
{ 720,  480},
-   { 800,  600},
-   { 848,  480},
-   {1024,  768},
{1152,  768},
-   {1280,  720},
-   {1280,  800},
{1280,  854},
-   {1280,  960},
-   {1280, 1024},
-   {1440,  900},
-   {1400, 1050},
-   {1680, 1050},
-   {1600, 1200},
-   {1920, 1080},
-   {1920, 1200}
 };
 
-static int qxl_add_common_modes(struct drm_connector *connector)
+static int qxl_add_extra_modes(struct drm_connector *connector)
 {
int i, ret = 0;
 
-   for (i = 0; i < ARRAY_SIZE(common_modes); i++)
+   for (i = 0; i < ARRAY_SIZE(extra_modes); i++)
ret += qxl_add_mode(connector,
-   common_modes[i].w,
-   common_modes[i].h,
+   extra_modes[i].w,
+   extra_modes[i].h,
false);
return ret;
 }
@@ -1010,7 +996,8 @@ static int qxl_conn_get_modes(struct drm_connector 
*connector)
pheight = head->height;
}
 
-   ret += qxl_add_common_modes(connector);
+   ret += drm_add_modes_noedid(connector, 8192, 8192);
+   ret += qxl_add_extra_modes(connector);
ret += qxl_add_monitors_config_modes(connector);
drm_set_preferred_mode(connector, pwidth, pheight);
return ret;
-- 
2.9.3



[PATCH v3 14/23] drm/qxl: cover all crtcs in shadow bo.

2019-01-18 Thread Gerd Hoffmann
The qxl device supports only a single active framebuffer ("primary
surface" in spice terminology).  Multihead configurations are handled
by defining rectangles within the primary surface for each head/crtc.

Userspace which uses the qxl ioctl interface (xorg qxl driver) is aware
of this limitation and will setup framebuffers and crtcs accordingly.

Userspace which uses dumb framebuffers (xorg modesetting driver,
wayland) is not aware of this limitation and tries to use two
framebuffers (one for each crtc) instead.

The qxl kms driver already has the dumb bo separated from the primary
surface, by using a (shared) shadow bo as primary surface.  This is
needed to support pageflips without having to re-create the primary
surface.  The qxl driver will blit from the dumb bo to the shadow bo
instead.

So we can extend the shadow logic:  maintain a global shadow bo (aka
primary surface), make it big enough that the dumb bos for all crtcs
fit in side by side, and adjust the pageflip blits to place the heads
next to each other in the shadow.

With this patch in place multihead qxl works with wayland.
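The side-by-side layout boils down to giving each head an x offset inside the shared shadow equal to the sum of the widths before it. A minimal sketch (hypothetical `demo_` helper, not the driver code):

```c
/*
 * Sketch: lay the crtcs' dumb bos out side by side in one shared
 * shadow surface.  Head i starts at the sum of the widths of heads
 * 0..i-1; the shadow must be at least the returned total wide (and
 * the tallest head high).
 */
static unsigned int demo_layout_heads(const unsigned int *widths,
				      unsigned int *x, int count)
{
	unsigned int total = 0;
	int i;

	for (i = 0; i < count; i++) {
		x[i] = total;          /* blit offset for this head */
		total += widths[i];
	}
	return total;                  /* required shadow width */
}
```

These per-head x offsets are what the adjusted pageflip blits and the monitors-config head.x fixup in the diff below account for.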

Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/qxl/qxl_drv.h |   5 +-
 drivers/gpu/drm/qxl/qxl_display.c | 119 +-
 drivers/gpu/drm/qxl/qxl_draw.c|   9 ++-
 3 files changed, 104 insertions(+), 29 deletions(-)

diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
index 150b1a4f66..43c6df9cf9 100644
--- a/drivers/gpu/drm/qxl/qxl_drv.h
+++ b/drivers/gpu/drm/qxl/qxl_drv.h
@@ -230,6 +230,8 @@ struct qxl_device {
struct qxl_ram_header *ram_header;
 
struct qxl_bo *primary_bo;
+   struct qxl_bo *dumb_shadow_bo;
+   struct qxl_head *dumb_heads;
 
struct qxl_memslot main_slot;
struct qxl_memslot surfaces_slot;
@@ -437,7 +439,8 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
   struct qxl_bo *bo,
   unsigned int flags, unsigned int color,
   struct drm_clip_rect *clips,
-  unsigned int num_clips, int inc);
+  unsigned int num_clips, int inc,
+  uint32_t dumb_shadow_offset);
 
 void qxl_draw_fill(struct qxl_draw_fill *qxl_draw_fill_rec);
 
diff --git a/drivers/gpu/drm/qxl/qxl_display.c 
b/drivers/gpu/drm/qxl/qxl_display.c
index ff13bc6a4a..d9de43e5fd 100644
--- a/drivers/gpu/drm/qxl/qxl_display.c
+++ b/drivers/gpu/drm/qxl/qxl_display.c
@@ -323,6 +323,8 @@ static void qxl_crtc_update_monitors_config(struct drm_crtc 
*crtc,
head.y = crtc->y;
if (qdev->monitors_config->count < i + 1)
qdev->monitors_config->count = i + 1;
+   if (qdev->primary_bo == qdev->dumb_shadow_bo)
+   head.x += qdev->dumb_heads[i].x;
} else if (i > 0) {
head.width = 0;
head.height = 0;
@@ -426,7 +428,7 @@ static int qxl_framebuffer_surface_dirty(struct 
drm_framebuffer *fb,
}
 
qxl_draw_dirty_fb(qdev, fb, qobj, flags, color,
- clips, num_clips, inc);
+ clips, num_clips, inc, 0);
 
drm_modeset_unlock_all(fb->dev);
 
@@ -535,6 +537,7 @@ static void qxl_primary_atomic_update(struct drm_plane 
*plane,
.x2 = plane->state->fb->width,
.y2 = plane->state->fb->height
};
+   uint32_t dumb_shadow_offset = 0;
 
if (old_state->fb) {
bo_old = gem_to_qxl_bo(old_state->fb->obj[0]);
@@ -551,7 +554,12 @@ static void qxl_primary_atomic_update(struct drm_plane 
*plane,
qxl_primary_apply_cursor(plane);
}
 
-   qxl_draw_dirty_fb(qdev, plane->state->fb, bo, 0, 0, &norect, 1, 1);
+   if (bo->is_dumb)
+   dumb_shadow_offset =
+   qdev->dumb_heads[plane->state->crtc->index].x;
+
+   qxl_draw_dirty_fb(qdev, plane->state->fb, bo, 0, 0, &norect, 1, 1,
+ dumb_shadow_offset);
 }
 
 static void qxl_primary_atomic_disable(struct drm_plane *plane,
@@ -707,12 +715,68 @@ static void qxl_cursor_atomic_disable(struct drm_plane 
*plane,
qxl_release_fence_buffer_objects(release);
 }
 
+static void qxl_update_dumb_head(struct qxl_device *qdev,
+int index, struct qxl_bo *bo)
+{
+   uint32_t width, height;
+
+   if (index >= qdev->monitors_config->max_allowed)
+   return;
+
+   if (bo && bo->is_dumb) {
+   width = bo->surf.width;
+   height = bo->surf.height;
+   } else {
+   width = 0;
+   height = 0;
+   }
+
+   if (qdev->dumb_heads[index].width == width &&
+   qdev->dumb_heads[index].height == height)
+   return;
+
+   DRM_DEBUG("#%d: %dx%d -> %dx%d\n", index,
+ qdev->dumb_heads[index].width,
+ qdev->dumb_heads[index].height,
+ width, he

[PATCH v3 19/23] drm/qxl: implement qxl_gem_prime_(un)pin

2019-01-18 Thread Gerd Hoffmann
Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/qxl/qxl_prime.c | 10 +++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c
index 708378844c..22e1faf047 100644
--- a/drivers/gpu/drm/qxl/qxl_prime.c
+++ b/drivers/gpu/drm/qxl/qxl_prime.c
@@ -22,6 +22,7 @@
  * Authors: Andreas Pokorny
  */
 
+#include "qxl_drv.h"
 #include "qxl_object.h"
 
 /* Empty Implementations as there should not be any other driver for a virtual
@@ -29,13 +30,16 @@
 
 int qxl_gem_prime_pin(struct drm_gem_object *obj)
 {
-   WARN_ONCE(1, "not implemented");
-   return -ENOSYS;
+   struct qxl_bo *bo = gem_to_qxl_bo(obj);
+
+   return qxl_bo_pin(bo);
 }
 
 void qxl_gem_prime_unpin(struct drm_gem_object *obj)
 {
-   WARN_ONCE(1, "not implemented");
+   struct qxl_bo *bo = gem_to_qxl_bo(obj);
+
+   qxl_bo_unpin(bo);
 }
 
 struct sg_table *qxl_gem_prime_get_sg_table(struct drm_gem_object *obj)
-- 
2.9.3



[PATCH v3 15/23] drm/qxl: use qxl_num_crtc directly

2019-01-18 Thread Gerd Hoffmann
qdev->monitors_config->max_allowed is effectively set by the
qxl.num_heads module parameter, stored in the qxl_num_crtc variable.
Let's get rid of the indirection and use qxl_num_crtc directly.  The
kernel doesn't need to dereference pointers each time it needs the
value, and when reading the code you don't have to trace where and why
qdev->monitors_config->max_allowed is set.

Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/qxl/qxl_display.c | 25 +++--
 1 file changed, 11 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/qxl/qxl_display.c 
b/drivers/gpu/drm/qxl/qxl_display.c
index d9de43e5fd..ef832f98ab 100644
--- a/drivers/gpu/drm/qxl/qxl_display.c
+++ b/drivers/gpu/drm/qxl/qxl_display.c
@@ -80,10 +80,10 @@ static int 
qxl_display_copy_rom_client_monitors_config(struct qxl_device *qdev)
DRM_DEBUG_KMS("no client monitors configured\n");
return status;
}
-   if (num_monitors > qdev->monitors_config->max_allowed) {
+   if (num_monitors > qxl_num_crtc) {
DRM_DEBUG_KMS("client monitors list will be truncated: %d < 
%d\n",
- qdev->monitors_config->max_allowed, num_monitors);
-   num_monitors = qdev->monitors_config->max_allowed;
+ qxl_num_crtc, num_monitors);
+   num_monitors = qxl_num_crtc;
} else {
num_monitors = qdev->rom->client_monitors_config.count;
}
@@ -96,8 +96,7 @@ static int qxl_display_copy_rom_client_monitors_config(struct 
qxl_device *qdev)
return status;
}
/* we copy max from the client but it isn't used */
-   qdev->client_monitors_config->max_allowed =
-   qdev->monitors_config->max_allowed;
+   qdev->client_monitors_config->max_allowed = qxl_num_crtc;
for (i = 0 ; i < qdev->client_monitors_config->count ; ++i) {
struct qxl_urect *c_rect =
&qdev->rom->client_monitors_config.heads[i];
@@ -204,7 +203,7 @@ static int qxl_add_monitors_config_modes(struct 
drm_connector *connector,
 
if (!qdev->monitors_config)
return 0;
-   if (h >= qdev->monitors_config->max_allowed)
+   if (h >= qxl_num_crtc)
return 0;
if (!qdev->client_monitors_config)
return 0;
@@ -307,8 +306,7 @@ static void qxl_crtc_update_monitors_config(struct drm_crtc 
*crtc,
return;
}
 
-   if (!qdev->monitors_config ||
-   qdev->monitors_config->max_allowed <= i)
+   if (!qdev->monitors_config || qxl_num_crtc <= i)
return;
 
head.id = i;
@@ -350,9 +348,10 @@ static void qxl_crtc_update_monitors_config(struct 
drm_crtc *crtc,
if (oldcount != qdev->monitors_config->count)
DRM_DEBUG_KMS("active heads %d -> %d (%d total)\n",
  oldcount, qdev->monitors_config->count,
- qdev->monitors_config->max_allowed);
+ qxl_num_crtc);
 
qdev->monitors_config->heads[i] = head;
+   qdev->monitors_config->max_allowed = qxl_num_crtc;
qxl_send_monitors_config(qdev);
 }
 
@@ -1146,9 +1145,8 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
 {
int ret;
struct drm_gem_object *gobj;
-   int max_allowed = qxl_num_crtc;
int monitors_config_size = sizeof(struct qxl_monitors_config) +
-   max_allowed * sizeof(struct qxl_head);
+   qxl_num_crtc * sizeof(struct qxl_head);
 
ret = qxl_gem_object_create(qdev, monitors_config_size, 0,
QXL_GEM_DOMAIN_VRAM,
@@ -1170,9 +1168,8 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
qxl_bo_physical_address(qdev, qdev->monitors_config_bo, 0);
 
memset(qdev->monitors_config, 0, monitors_config_size);
-   qdev->monitors_config->max_allowed = max_allowed;
-
-   qdev->dumb_heads = kcalloc(max_allowed, sizeof(qdev->dumb_heads[0]), 
GFP_KERNEL);
+   qdev->dumb_heads = kcalloc(qxl_num_crtc, sizeof(qdev->dumb_heads[0]),
+  GFP_KERNEL);
return 0;
 }
 
-- 
2.9.3



[PATCH v3 13/23] drm/qxl: use shadow bo directly

2019-01-18 Thread Gerd Hoffmann
Pass the shadow bo to qxl_io_create_primary() instead of expecting
qxl_io_create_primary to check bo->shadow.  Set is_primary flag on the
shadow bo.  Move the is_primary tracking into qxl_io_create_primary()
and qxl_io_destroy_primary() functions.

That simplifies primary surface tracking and the workflow in
qxl_primary_atomic_update().

Signed-off-by: Gerd Hoffmann 

qxl_io_create/destroy_primary: primary_bo tracking [fixup]
---
 drivers/gpu/drm/qxl/qxl_cmd.c | 10 +-
 drivers/gpu/drm/qxl/qxl_display.c | 33 +++--
 2 files changed, 16 insertions(+), 27 deletions(-)

diff --git a/drivers/gpu/drm/qxl/qxl_cmd.c b/drivers/gpu/drm/qxl/qxl_cmd.c
index 8e64127259..0a2e51af12 100644
--- a/drivers/gpu/drm/qxl/qxl_cmd.c
+++ b/drivers/gpu/drm/qxl/qxl_cmd.c
@@ -374,6 +374,8 @@ void qxl_io_flush_surfaces(struct qxl_device *qdev)
 void qxl_io_destroy_primary(struct qxl_device *qdev)
 {
wait_for_io_cmd(qdev, 0, QXL_IO_DESTROY_PRIMARY_ASYNC);
+   qdev->primary_bo->is_primary = false;
+   drm_gem_object_put_unlocked(&qdev->primary_bo->gem_base);
qdev->primary_bo = NULL;
 }
 
@@ -390,11 +392,7 @@ void qxl_io_create_primary(struct qxl_device *qdev, struct 
qxl_bo *bo)
create->width = bo->surf.width;
create->height = bo->surf.height;
create->stride = bo->surf.stride;
-   if (bo->shadow) {
-   create->mem = qxl_bo_physical_address(qdev, bo->shadow, 0);
-   } else {
-   create->mem = qxl_bo_physical_address(qdev, bo, 0);
-   }
+   create->mem = qxl_bo_physical_address(qdev, bo, 0);
 
DRM_DEBUG_DRIVER("mem = %llx, from %p\n", create->mem, bo->kptr);
 
@@ -403,6 +401,8 @@ void qxl_io_create_primary(struct qxl_device *qdev, struct 
qxl_bo *bo)
 
wait_for_io_cmd(qdev, 0, QXL_IO_CREATE_PRIMARY_ASYNC);
qdev->primary_bo = bo;
+   qdev->primary_bo->is_primary = true;
+   drm_gem_object_get(&qdev->primary_bo->gem_base);
 }
 
 void qxl_io_memslot_add(struct qxl_device *qdev, uint8_t id)
diff --git a/drivers/gpu/drm/qxl/qxl_display.c 
b/drivers/gpu/drm/qxl/qxl_display.c
index d3215eac9b..ff13bc6a4a 100644
--- a/drivers/gpu/drm/qxl/qxl_display.c
+++ b/drivers/gpu/drm/qxl/qxl_display.c
@@ -401,13 +401,15 @@ static int qxl_framebuffer_surface_dirty(struct 
drm_framebuffer *fb,
struct qxl_device *qdev = fb->dev->dev_private;
struct drm_clip_rect norect;
struct qxl_bo *qobj;
+   bool is_primary;
int inc = 1;
 
drm_modeset_lock_all(fb->dev);
 
qobj = gem_to_qxl_bo(fb->obj[0]);
/* if we aren't primary surface ignore this */
-   if (!qobj->is_primary) {
+   is_primary = qobj->shadow ? qobj->shadow->is_primary : qobj->is_primary;
+   if (!is_primary) {
drm_modeset_unlock_all(fb->dev);
return 0;
}
@@ -526,14 +528,13 @@ static void qxl_primary_atomic_update(struct drm_plane 
*plane,
 {
struct qxl_device *qdev = plane->dev->dev_private;
struct qxl_bo *bo = gem_to_qxl_bo(plane->state->fb->obj[0]);
-   struct qxl_bo *bo_old;
+   struct qxl_bo *bo_old, *primary;
struct drm_clip_rect norect = {
.x1 = 0,
.y1 = 0,
.x2 = plane->state->fb->width,
.y2 = plane->state->fb->height
};
-   bool same_shadow = false;
 
if (old_state->fb) {
bo_old = gem_to_qxl_bo(old_state->fb->obj[0]);
@@ -541,26 +542,13 @@ static void qxl_primary_atomic_update(struct drm_plane 
*plane,
bo_old = NULL;
}
 
-   if (bo == bo_old)
-   return;
+   primary = bo->shadow ? bo->shadow : bo;
 
-   if (bo_old && bo_old->shadow && bo->shadow &&
-   bo_old->shadow == bo->shadow) {
-   same_shadow = true;
-   }
-
-   if (bo_old && bo_old->is_primary) {
-   if (!same_shadow)
+   if (!primary->is_primary) {
+   if (qdev->primary_bo)
qxl_io_destroy_primary(qdev);
-   bo_old->is_primary = false;
-   }
-
-   if (!bo->is_primary) {
-   if (!same_shadow) {
-   qxl_io_create_primary(qdev, bo);
-   qxl_primary_apply_cursor(plane);
-   }
-   bo->is_primary = true;
+   qxl_io_create_primary(qdev, primary);
+   qxl_primary_apply_cursor(plane);
}
 
qxl_draw_dirty_fb(qdev, plane->state->fb, bo, 0, 0, &norect, 1, 1);
@@ -756,6 +744,7 @@ static int qxl_plane_prepare_fb(struct drm_plane *plane,
qxl_bo_create(qdev, user_bo->gem_base.size,
  true, true, QXL_GEM_DOMAIN_SURFACE, NULL,
  &user_bo->shadow);
+   user_bo->shadow->surf = user_bo->surf;
}
}
 
@@ -784,7 +773,7 @@ static void qxl_plane_cleanup_fb(struct drm_plane *plane,
use

[PATCH v3 10/23] drm/qxl: move qxl_primary_apply_cursor to correct place

2019-01-18 Thread Gerd Hoffmann
The cursor must be set again after creating the primary surface.
Also drop the error message.

Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/qxl/qxl_display.c | 10 +++---
 1 file changed, 3 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/qxl/qxl_display.c 
b/drivers/gpu/drm/qxl/qxl_display.c
index 86bfc19bea..1b700ef503 100644
--- a/drivers/gpu/drm/qxl/qxl_display.c
+++ b/drivers/gpu/drm/qxl/qxl_display.c
@@ -533,7 +533,6 @@ static void qxl_primary_atomic_update(struct drm_plane 
*plane,
.x2 = plane->state->fb->width,
.y2 = plane->state->fb->height
};
-   int ret;
bool same_shadow = false;
 
if (old_state->fb) {
@@ -554,16 +553,13 @@ static void qxl_primary_atomic_update(struct drm_plane 
*plane,
if (!same_shadow)
qxl_io_destroy_primary(qdev);
bo_old->is_primary = false;
-
-   ret = qxl_primary_apply_cursor(plane);
-   if (ret)
-   DRM_ERROR(
-   "could not set cursor after creating primary");
}
 
if (!bo->is_primary) {
-   if (!same_shadow)
+   if (!same_shadow) {
qxl_io_create_primary(qdev, 0, bo);
+   qxl_primary_apply_cursor(plane);
+   }
bo->is_primary = true;
}
 
-- 
2.9.3



[PATCH v3 11/23] drm/qxl: drop unused offset parameter from qxl_io_create_primary()

2019-01-18 Thread Gerd Hoffmann
Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/qxl/qxl_drv.h | 1 -
 drivers/gpu/drm/qxl/qxl_cmd.c | 7 +++
 drivers/gpu/drm/qxl/qxl_display.c | 2 +-
 3 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
index 27e0a3fc08..cb767aaef6 100644
--- a/drivers/gpu/drm/qxl/qxl_drv.h
+++ b/drivers/gpu/drm/qxl/qxl_drv.h
@@ -385,7 +385,6 @@ void qxl_update_screen(struct qxl_device *qxl);
 /* qxl io operations (qxl_cmd.c) */
 
 void qxl_io_create_primary(struct qxl_device *qdev,
-  unsigned int offset,
   struct qxl_bo *bo);
 void qxl_io_destroy_primary(struct qxl_device *qdev);
 void qxl_io_memslot_add(struct qxl_device *qdev, uint8_t id);
diff --git a/drivers/gpu/drm/qxl/qxl_cmd.c b/drivers/gpu/drm/qxl/qxl_cmd.c
index 5ba831c78c..bc13539249 100644
--- a/drivers/gpu/drm/qxl/qxl_cmd.c
+++ b/drivers/gpu/drm/qxl/qxl_cmd.c
@@ -377,8 +377,7 @@ void qxl_io_destroy_primary(struct qxl_device *qdev)
qdev->primary_created = false;
 }
 
-void qxl_io_create_primary(struct qxl_device *qdev,
-  unsigned int offset, struct qxl_bo *bo)
+void qxl_io_create_primary(struct qxl_device *qdev, struct qxl_bo *bo)
 {
struct qxl_surface_create *create;
 
@@ -389,9 +388,9 @@ void qxl_io_create_primary(struct qxl_device *qdev,
create->height = bo->surf.height;
create->stride = bo->surf.stride;
if (bo->shadow) {
-   create->mem = qxl_bo_physical_address(qdev, bo->shadow, offset);
+   create->mem = qxl_bo_physical_address(qdev, bo->shadow, 0);
} else {
-   create->mem = qxl_bo_physical_address(qdev, bo, offset);
+   create->mem = qxl_bo_physical_address(qdev, bo, 0);
}
 
DRM_DEBUG_DRIVER("mem = %llx, from %p\n", create->mem, bo->kptr);
diff --git a/drivers/gpu/drm/qxl/qxl_display.c 
b/drivers/gpu/drm/qxl/qxl_display.c
index 1b700ef503..21165ab514 100644
--- a/drivers/gpu/drm/qxl/qxl_display.c
+++ b/drivers/gpu/drm/qxl/qxl_display.c
@@ -557,7 +557,7 @@ static void qxl_primary_atomic_update(struct drm_plane 
*plane,
 
if (!bo->is_primary) {
if (!same_shadow) {
-   qxl_io_create_primary(qdev, 0, bo);
+   qxl_io_create_primary(qdev, bo);
qxl_primary_apply_cursor(plane);
}
bo->is_primary = true;
-- 
2.9.3



[PATCH v3 06/23] drm/qxl: use separate offset spaces for the two slots / ttm memory types.

2019-01-18 Thread Gerd Hoffmann
Without that, ttm offsets are not unique: they can refer to objects
in both VRAM and PRIV memory (aka the main and surfaces slots).

One of those "why things didn't blow up without this" moments.
Probably offset conflicts are rare enough by pure luck.
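The effect of a per-slot gpu_offset can be sketched with made-up numbers: each slot's high bits tag which memory slot an address belongs to, and subtracting the slot's gpu_offset turns ttm's global offset back into a slot-relative one. The `demo_` names and values below are illustrative, not the driver's:

```c
#include <stdint.h>

/*
 * Sketch: two slots whose ttm offset ranges no longer collide once
 * each slot subtracts its own gpu_offset, mirroring the arithmetic in
 * qxl_bo_physical_address().  The high bits encode the slot id.
 */
struct demo_slot {
	uint64_t high_bits;   /* slot id shifted into the top bits */
	uint64_t gpu_offset;  /* where this slot starts in ttm's space */
};

static uint64_t demo_physical_address(const struct demo_slot *slot,
				      uint64_t tbo_offset, uint64_t offset)
{
	return slot->high_bits | (tbo_offset - slot->gpu_offset + offset);
}
```

With the subtraction in place, two objects at the same slot-relative offset in different slots still get distinct device addresses, because the high bits differ.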

Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/qxl/qxl_drv.h |  5 -
 drivers/gpu/drm/qxl/qxl_kms.c |  5 +++--
 drivers/gpu/drm/qxl/qxl_ttm.c | 10 +-
 3 files changed, 16 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
index 3ebe66abf2..27e0a3fc08 100644
--- a/drivers/gpu/drm/qxl/qxl_drv.h
+++ b/drivers/gpu/drm/qxl/qxl_drv.h
@@ -136,6 +136,7 @@ struct qxl_memslot {
uint64_tstart_phys_addr;
uint64_tsize;
uint64_thigh_bits;
+   uint64_tgpu_offset;
 };
 
 enum {
@@ -312,8 +313,10 @@ qxl_bo_physical_address(struct qxl_device *qdev, struct qxl_bo *bo,
(bo->tbo.mem.mem_type == TTM_PL_VRAM)
? &qdev->main_slot : &qdev->surfaces_slot;
 
+   WARN_ON_ONCE((bo->tbo.offset & slot->gpu_offset) != slot->gpu_offset);
+
/* TODO - need to hold one of the locks to read tbo.offset */
-   return slot->high_bits | (bo->tbo.offset + offset);
+   return slot->high_bits | (bo->tbo.offset - slot->gpu_offset + offset);
 }
 
 /* qxl_fb.c */
diff --git a/drivers/gpu/drm/qxl/qxl_kms.c b/drivers/gpu/drm/qxl/qxl_kms.c
index 3c1753667d..82c764623f 100644
--- a/drivers/gpu/drm/qxl/qxl_kms.c
+++ b/drivers/gpu/drm/qxl/qxl_kms.c
@@ -83,10 +83,11 @@ static void setup_slot(struct qxl_device *qdev,
high_bits <<= (64 - (qdev->rom->slot_gen_bits + qdev->rom->slot_id_bits));
slot->high_bits = high_bits;
 
-   DRM_INFO("slot %d (%s): base 0x%08lx, size 0x%08lx\n",
+   DRM_INFO("slot %d (%s): base 0x%08lx, size 0x%08lx, gpu_offset 0x%lx\n",
 slot->index, slot->name,
 (unsigned long)slot->start_phys_addr,
-(unsigned long)slot->size);
+(unsigned long)slot->size,
+(unsigned long)slot->gpu_offset);
 }
 
 void qxl_reinit_memslots(struct qxl_device *qdev)
diff --git a/drivers/gpu/drm/qxl/qxl_ttm.c b/drivers/gpu/drm/qxl/qxl_ttm.c
index 886f61e94f..36ea993aac 100644
--- a/drivers/gpu/drm/qxl/qxl_ttm.c
+++ b/drivers/gpu/drm/qxl/qxl_ttm.c
@@ -100,6 +100,11 @@ static int qxl_invalidate_caches(struct ttm_bo_device *bdev, uint32_t flags)
 static int qxl_init_mem_type(struct ttm_bo_device *bdev, uint32_t type,
 struct ttm_mem_type_manager *man)
 {
+   struct qxl_device *qdev = qxl_get_qdev(bdev);
+   unsigned int gpu_offset_shift =
+   64 - (qdev->rom->slot_gen_bits + qdev->rom->slot_id_bits + 8);
+   struct qxl_memslot *slot;
+
switch (type) {
case TTM_PL_SYSTEM:
/* System memory */
@@ -110,8 +115,11 @@ static int qxl_init_mem_type(struct ttm_bo_device *bdev, uint32_t type,
case TTM_PL_VRAM:
case TTM_PL_PRIV:
/* "On-card" video ram */
+   slot = (type == TTM_PL_VRAM) ?
+   &qdev->main_slot : &qdev->surfaces_slot;
+   slot->gpu_offset = (uint64_t)type << gpu_offset_shift;
man->func = &ttm_bo_manager_func;
-   man->gpu_offset = 0;
+   man->gpu_offset = slot->gpu_offset;
man->flags = TTM_MEMTYPE_FLAG_FIXED |
 TTM_MEMTYPE_FLAG_MAPPABLE;
man->available_caching = TTM_PL_MASK_CACHING;
-- 
2.9.3

___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


[PATCH v3 03/23] drm/qxl: simplify slot management

2019-01-18 Thread Gerd Hoffmann
Drop pointless indirection, remove the mem_slots array and index
variables, drop dynamic allocation.  Store memslots in qxl_device
instead.

Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/qxl/qxl_drv.h | 15 +
 drivers/gpu/drm/qxl/qxl_kms.c | 72 +--
 2 files changed, 36 insertions(+), 51 deletions(-)

diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
index 7eabf4a9ed..f9dddfe7d9 100644
--- a/drivers/gpu/drm/qxl/qxl_drv.h
+++ b/drivers/gpu/drm/qxl/qxl_drv.h
@@ -130,9 +130,11 @@ struct qxl_mman {
 };
 
 struct qxl_memslot {
+   int index;
+   const char  *name;
uint8_t generation;
uint64_tstart_phys_addr;
-   uint64_tend_phys_addr;
+   uint64_tsize;
uint64_thigh_bits;
 };
 
@@ -228,11 +230,8 @@ struct qxl_device {
 
unsigned int primary_created:1;
 
-   struct qxl_memslot  *mem_slots;
-   uint8_t n_mem_slots;
-
-   uint8_t main_mem_slot;
-   uint8_t surfaces_mem_slot;
+   struct qxl_memslot main_slot;
+   struct qxl_memslot surfaces_slot;
uint8_t slot_id_bits;
uint8_t slot_gen_bits;
uint64_tva_slot_mask;
@@ -312,8 +311,8 @@ static inline uint64_t
 qxl_bo_physical_address(struct qxl_device *qdev, struct qxl_bo *bo,
unsigned long offset)
 {
-   int slot_id = bo->type == QXL_GEM_DOMAIN_VRAM ? qdev->main_mem_slot : qdev->surfaces_mem_slot;
-   struct qxl_memslot *slot = &(qdev->mem_slots[slot_id]);
+   struct qxl_memslot *slot = bo->type == QXL_GEM_DOMAIN_VRAM
+   ? &qdev->main_slot : &qdev->surfaces_slot;
 
/* TODO - need to hold one of the locks to read tbo.offset */
return slot->high_bits | (bo->tbo.offset + offset);
diff --git a/drivers/gpu/drm/qxl/qxl_kms.c b/drivers/gpu/drm/qxl/qxl_kms.c
index 15238a413f..a9288100ae 100644
--- a/drivers/gpu/drm/qxl/qxl_kms.c
+++ b/drivers/gpu/drm/qxl/qxl_kms.c
@@ -53,40 +53,46 @@ static bool qxl_check_device(struct qxl_device *qdev)
return true;
 }
 
-static void setup_hw_slot(struct qxl_device *qdev, int slot_index,
- struct qxl_memslot *slot)
+static void setup_hw_slot(struct qxl_device *qdev, struct qxl_memslot *slot)
 {
qdev->ram_header->mem_slot.mem_start = slot->start_phys_addr;
-   qdev->ram_header->mem_slot.mem_end = slot->end_phys_addr;
-   qxl_io_memslot_add(qdev, slot_index);
+   qdev->ram_header->mem_slot.mem_end = slot->start_phys_addr + slot->size;
+   qxl_io_memslot_add(qdev, qdev->rom->slots_start + slot->index);
 }
 
-static uint8_t setup_slot(struct qxl_device *qdev, uint8_t slot_index_offset,
-   unsigned long start_phys_addr, unsigned long end_phys_addr)
+static void setup_slot(struct qxl_device *qdev,
+  struct qxl_memslot *slot,
+  unsigned int slot_index,
+  const char *slot_name,
+  unsigned long start_phys_addr,
+  unsigned long size)
 {
uint64_t high_bits;
-   struct qxl_memslot *slot;
-   uint8_t slot_index;
 
-   slot_index = qdev->rom->slots_start + slot_index_offset;
-   slot = &qdev->mem_slots[slot_index];
+   slot->index = slot_index;
+   slot->name = slot_name;
slot->start_phys_addr = start_phys_addr;
-   slot->end_phys_addr = end_phys_addr;
+   slot->size = size;
 
-   setup_hw_slot(qdev, slot_index, slot);
+   setup_hw_slot(qdev, slot);
 
slot->generation = qdev->rom->slot_generation;
-   high_bits = slot_index << qdev->slot_gen_bits;
+   high_bits = (qdev->rom->slots_start + slot->index)
+   << qdev->slot_gen_bits;
high_bits |= slot->generation;
high_bits <<= (64 - (qdev->slot_gen_bits + qdev->slot_id_bits));
slot->high_bits = high_bits;
-   return slot_index;
+
+   DRM_INFO("slot %d (%s): base 0x%08lx, size 0x%08lx\n",
+slot->index, slot->name,
+(unsigned long)slot->start_phys_addr,
+(unsigned long)slot->size);
 }
 
 void qxl_reinit_memslots(struct qxl_device *qdev)
 {
-   setup_hw_slot(qdev, qdev->main_mem_slot, &qdev->mem_slots[qdev->main_mem_slot]);
-   setup_hw_slot(qdev, qdev->surfaces_mem_slot, &qdev->mem_slots[qdev->surfaces_mem_slot]);
+   setup_hw_slot(qdev, &qdev->main_slot);
+   setup_hw_slot(qdev, &qdev->surfaces_slot);
+   setup_hw_slot(qdev, &qdev->main_slot);
+   setup_hw_slot(qdev, &qdev->surfaces_slot);
 }
 
 static void qxl_gc_work(struct work_struct *work)
@@ -231,22 +237,11 @@ int qxl_device_init(struct qxl_device *qdev,
}
/* TODO - slot initialization should happen on reset. where is our
 * reset handler? */
-   qdev->n_mem_slots = qdev->rom->slots_end;
qdev->slot_gen_bits = qdev->rom->slot_gen_bits;
qdev->slot_id_bits = qdev->rom->slot_id_bits;
qdev->va_slot_mask =
 

[PATCH v3 09/23] drm/qxl: use QXL_GEM_DOMAIN_SURFACE for dumb gem objects

2019-01-18 Thread Gerd Hoffmann
Dumb buffers are used as qxl surfaces, so allocate them as
QXL_GEM_DOMAIN_SURFACE.  They should then usually land in the PRIV ttm
domain, which reduces VRAM memory pressure.

Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/qxl/qxl_dumb.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/qxl/qxl_dumb.c b/drivers/gpu/drm/qxl/qxl_dumb.c
index e3765739c3..272d19b677 100644
--- a/drivers/gpu/drm/qxl/qxl_dumb.c
+++ b/drivers/gpu/drm/qxl/qxl_dumb.c
@@ -59,7 +59,7 @@ int qxl_mode_dumb_create(struct drm_file *file_priv,
surf.stride = pitch;
surf.format = format;
r = qxl_gem_object_create_with_handle(qdev, file_priv,
- QXL_GEM_DOMAIN_VRAM,
+ QXL_GEM_DOMAIN_SURFACE,
  args->size, &surf, &qobj,
  &handle);
if (r)
-- 
2.9.3



[PATCH v3 05/23] drm/qxl: drop unused fields from struct qxl_device

2019-01-18 Thread Gerd Hoffmann
slot_id_bits and slot_gen_bits can be read directly from the qxl rom
instead.  va_slot_mask is never used anywhere.

Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/qxl/qxl_drv.h |  3 ---
 drivers/gpu/drm/qxl/qxl_kms.c | 10 ++
 2 files changed, 2 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
index d015d4fff1..3ebe66abf2 100644
--- a/drivers/gpu/drm/qxl/qxl_drv.h
+++ b/drivers/gpu/drm/qxl/qxl_drv.h
@@ -232,9 +232,6 @@ struct qxl_device {
 
struct qxl_memslot main_slot;
struct qxl_memslot surfaces_slot;
-   uint8_t slot_id_bits;
-   uint8_t slot_gen_bits;
-   uint64_tva_slot_mask;
 
spinlock_t  release_lock;
struct idr  release_idr;
diff --git a/drivers/gpu/drm/qxl/qxl_kms.c b/drivers/gpu/drm/qxl/qxl_kms.c
index a9288100ae..3c1753667d 100644
--- a/drivers/gpu/drm/qxl/qxl_kms.c
+++ b/drivers/gpu/drm/qxl/qxl_kms.c
@@ -78,9 +78,9 @@ static void setup_slot(struct qxl_device *qdev,
 
slot->generation = qdev->rom->slot_generation;
high_bits = (qdev->rom->slots_start + slot->index)
-   << qdev->slot_gen_bits;
+   << qdev->rom->slot_gen_bits;
high_bits |= slot->generation;
-   high_bits <<= (64 - (qdev->slot_gen_bits + qdev->slot_id_bits));
+   high_bits <<= (64 - (qdev->rom->slot_gen_bits + qdev->rom->slot_id_bits));
slot->high_bits = high_bits;
 
DRM_INFO("slot %d (%s): base 0x%08lx, size 0x%08lx\n",
@@ -235,12 +235,6 @@ int qxl_device_init(struct qxl_device *qdev,
r = -ENOMEM;
goto cursor_ring_free;
}
-   /* TODO - slot initialization should happen on reset. where is our
-* reset handler? */
-   qdev->slot_gen_bits = qdev->rom->slot_gen_bits;
-   qdev->slot_id_bits = qdev->rom->slot_id_bits;
-   qdev->va_slot_mask =
-   (~(uint64_t)0) >> (qdev->slot_id_bits + qdev->slot_gen_bits);
 
idr_init(&qdev->release_idr);
spin_lock_init(&qdev->release_idr_lock);
-- 
2.9.3



[PATCH v3 04/23] drm/qxl: change the way slot is detected

2019-01-18 Thread Gerd Hoffmann
From: Frediano Ziglio 

Instead of relying on the surface type, use the actual placement.
This allows a single surface type to have different placements.

Signed-off-by: Frediano Ziglio 

[ kraxel: rebased, adapted to upstream changes ]

Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/qxl/qxl_drv.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
index f9dddfe7d9..d015d4fff1 100644
--- a/drivers/gpu/drm/qxl/qxl_drv.h
+++ b/drivers/gpu/drm/qxl/qxl_drv.h
@@ -311,7 +311,8 @@ static inline uint64_t
 qxl_bo_physical_address(struct qxl_device *qdev, struct qxl_bo *bo,
unsigned long offset)
 {
-   struct qxl_memslot *slot = bo->type == QXL_GEM_DOMAIN_VRAM
+   struct qxl_memslot *slot =
+   (bo->tbo.mem.mem_type == TTM_PL_VRAM)
? &qdev->main_slot : &qdev->surfaces_slot;
 
/* TODO - need to hold one of the locks to read tbo.offset */
-- 
2.9.3



Re: [PATCH net V4] vhost: log dirty page correctly

2019-01-18 Thread Michael S. Tsirkin
On Wed, Jan 16, 2019 at 04:54:42PM +0800, Jason Wang wrote:
> Vhost dirty page logging API is designed to sync through GPA. But we
> try to log GIOVA when device IOTLB is enabled. This is wrong and may
> lead to missing data after migration.
> 
> To solve this issue, when logging with device IOTLB enabled, we will:
> 
> 1) reuse the device IOTLB translation result of GIOVA->HVA mapping to
>get HVA, for writable descriptor, get HVA through iovec. For used
>ring update, translate its GIOVA to HVA
> 2) traverse the GPA->HVA mapping to get the possible GPAs and log
>    through GPA. Note that this reverse mapping is not guaranteed
>    to be unique, so we should log each possible GPA in this case.
> 
> This fixes the failure of scp to the guest during migration. In -next, we
> will probably support passing GIOVA->GPA instead of GIOVA->HVA.
> 
> Fixes: 6b1e6cc7855b ("vhost: new device IOTLB API")
> Reported-by: Jintack Lim 
> Cc: Jintack Lim 
> Signed-off-by: Jason Wang 

This one looks good to me

Acked-by: Michael S. Tsirkin 

> ---
> Changes from V3:
> - make sure each part of the hva was logged when crossing the boundary
>   of memory regions
> Changes from V2:
> - check and log the case of range overlap
> - remove unnecessary u64 cast
> - use smp_wmb() for the case of device IOTLB as well
> Changes from V1:
> - return error instead of warn
> ---
>  drivers/vhost/net.c   |  3 +-
>  drivers/vhost/vhost.c | 97 ---
>  drivers/vhost/vhost.h |  3 +-
>  3 files changed, 87 insertions(+), 16 deletions(-)
> 
> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> index 36f3d0f49e60..bca86bf7189f 100644
> --- a/drivers/vhost/net.c
> +++ b/drivers/vhost/net.c
> @@ -1236,7 +1236,8 @@ static void handle_rx(struct vhost_net *net)
>   if (nvq->done_idx > VHOST_NET_BATCH)
>   vhost_net_signal_used(nvq);
>   if (unlikely(vq_log))
> - vhost_log_write(vq, vq_log, log, vhost_len);
> + vhost_log_write(vq, vq_log, log, vhost_len,
> + vq->iov, in);
>   total_len += vhost_len;
>   if (unlikely(vhost_exceeds_weight(++recv_pkts, total_len))) {
>   vhost_poll_queue(&vq->poll);
> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> index 9f7942cbcbb2..babbb32b9bf0 100644
> --- a/drivers/vhost/vhost.c
> +++ b/drivers/vhost/vhost.c
> @@ -1733,13 +1733,87 @@ static int log_write(void __user *log_base,
>   return r;
>  }
>  
> +static int log_write_hva(struct vhost_virtqueue *vq, u64 hva, u64 len)
> +{
> + struct vhost_umem *umem = vq->umem;
> + struct vhost_umem_node *u;
> + u64 start, end, l, min;
> + int r;
> + bool hit = false;
> +
> + while (len) {
> + min = len;
> + /* More than one GPAs can be mapped into a single HVA. So
> +  * iterate all possible umems here to be safe.
> +  */
> + list_for_each_entry(u, &umem->umem_list, link) {
> + if (u->userspace_addr > hva - 1 + len ||
> + u->userspace_addr - 1 + u->size < hva)
> + continue;
> + start = max(u->userspace_addr, hva);
> + end = min(u->userspace_addr - 1 + u->size,
> +   hva - 1 + len);
> + l = end - start + 1;
> + r = log_write(vq->log_base,
> +   u->start + start - u->userspace_addr,
> +   l);
> + if (r < 0)
> + return r;
> + hit = true;
> + min = min(l, min);
> + }
> +
> + if (!hit)
> + return -EFAULT;
> +
> + len -= min;
> + hva += min;
> + }
> +
> + return 0;
> +}
> +
> +static int log_used(struct vhost_virtqueue *vq, u64 used_offset, u64 len)
> +{
> + struct iovec iov[64];
> + int i, ret;
> +
> + if (!vq->iotlb)
> + return log_write(vq->log_base, vq->log_addr + used_offset, len);
> +
> + ret = translate_desc(vq, (uintptr_t)vq->used + used_offset,
> +  len, iov, 64, VHOST_ACCESS_WO);
> + if (ret)
> + return ret;
> +
> + for (i = 0; i < ret; i++) {
> + ret = log_write_hva(vq, (uintptr_t)iov[i].iov_base,
> + iov[i].iov_len);
> + if (ret)
> + return ret;
> + }
> +
> + return 0;
> +}
> +
>  int vhost_log_write(struct vhost_virtqueue *vq, struct vhost_log *log,
> - unsigned int log_num, u64 len)
> + unsigned int log_num, u64 len, struct iovec *iov, int count)
>  {
>   int i, r;
>  
>   /* Make sure data written is seen before log. */
>   smp_wmb();
> +
> + if (vq->iotlb) {
> 

Re: [PATCH net V4] vhost: log dirty page correctly

2019-01-18 Thread David Miller
From: Jason Wang 
Date: Wed, 16 Jan 2019 16:54:42 +0800

> Vhost dirty page logging API is designed to sync through GPA. But we
> try to log GIOVA when device IOTLB is enabled. This is wrong and may
> lead to missing data after migration.
> 
> To solve this issue, when logging with device IOTLB enabled, we will:
> 
> 1) reuse the device IOTLB translation result of GIOVA->HVA mapping to
>get HVA, for writable descriptor, get HVA through iovec. For used
>ring update, translate its GIOVA to HVA
> 2) traverse the GPA->HVA mapping to get the possible GPAs and log
>    through GPA. Note that this reverse mapping is not guaranteed
>    to be unique, so we should log each possible GPA in this case.
> 
> This fixes the failure of scp to the guest during migration. In -next, we
> will probably support passing GIOVA->GPA instead of GIOVA->HVA.
> 
> Fixes: 6b1e6cc7855b ("vhost: new device IOTLB API")
> Reported-by: Jintack Lim 
> Cc: Jintack Lim 
> Signed-off-by: Jason Wang 

Applied and queued up for -stable.


Re: [PATCH net 5/7] virtio_net: Don't process redirected XDP frames when XDP is disabled

2019-01-18 Thread Toshiaki Makita
On 2019/01/18 12:52, Jason Wang wrote:
> On 2019/1/18 9:56 AM, Toshiaki Makita wrote:
>> On 2019/01/17 22:05, Jason Wang wrote:
>>> On 2019/1/17 8:53 PM, Jason Wang wrote:
 On 2019/1/17 7:20 PM, Toshiaki Makita wrote:
> Commit 8dcc5b0ab0ec ("virtio_net: fix ndo_xdp_xmit crash towards
> dev not
> ready for XDP") tried to avoid access to unexpected sq while XDP is
> disabled, but was not complete.
>
> There was a small window which causes out of bounds sq access in
> virtnet_xdp_xmit() while disabling XDP.
>
> An example case of
>    - curr_queue_pairs = 6 (2 for SKB and 4 for XDP)
>    - online_cpu_num = xdp_queue_paris = 4
> when XDP is enabled:
>
> CPU 0 CPU 1
> (Disabling XDP)   (Processing redirected XDP frames)
>
>     virtnet_xdp_xmit()
> virtnet_xdp_set()
>    _virtnet_set_queues()
>     set curr_queue_pairs (2)
>  check if rq->xdp_prog is not NULL
>  virtnet_xdp_sq(vi)
>   qp = curr_queue_pairs -
>    xdp_queue_pairs +
>    smp_processor_id()
>  = 2 - 4 + 1 = -1
>   sq = &vi->sq[qp] // out of bounds
> access
>     set xdp_queue_pairs (0)
>     rq->xdp_prog = NULL
>
> Basically we should not change curr_queue_pairs and xdp_queue_pairs
> while someone can read the values. Thus, when disabling XDP, assign
> NULL
> to rq->xdp_prog first, and wait for RCU grace period, then change
> xxx_queue_pairs.
> Note that we need to keep the current order when enabling XDP though.
>
> Fixes: 186b3c998c50 ("virtio-net: support XDP_REDIRECT")
> Signed-off-by: Toshiaki Makita 

 I wonder whether or not we could simply do:


 if (prog) {
>>>
>>> Should be !prog
>>>
>>>
  rcu_assign_pointer()

  synchronize_net()

 }

 set queues

 if (!prog) {
>>>
>>> Should be prog.
>> Either would work.
>>
>> With your suggestion the code will look like:
>>
>> ---
>> if (!prog) {
>> for (...) {
>>     rcu_assign_pointer();
>>     ...
>> }
>> synchronize_net();
>> }
>>
>> virtnet_set_queues();
>> netif_set_real_num_rx_queues();
>> vi->xdp_queue_pairs = xdp_qp;
>>
>> if (prog) {
>> for (...) {
>>     rcu_assign_pointer();
>>     ...
>> }
>> }
> 
> 
> Yes, I think this makes code more easier to be understand.
> 
> 
>> ---
>>
>> But strictly speaking, virtnet_set_queues() should not be necessary if
>> (prog != NULL && old_prog != NULL).
> 
> 
> Yes, but it was another possible 'issue'.
> 
> 
>> If you prefer this, I can modify it accordingly.
> 
> 
> I prefer to do this change.

OK, will do in v2.

-- 
Toshiaki Makita


[PATCH v2] virtio_net: bulk free tx skbs

2019-01-18 Thread Michael S. Tsirkin
Use napi_consume_skb() to get bulk free.  Note that napi_consume_skb is
safe to call in a non-napi context as long as the napi_budget flag is
correct.

Signed-off-by: Michael S. Tsirkin 
---

Changes from v1:
rebase on master.

lightly tested on developer's box.

 drivers/net/virtio_net.c | 12 ++--
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 023725086046..8fadd8eaf601 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -1330,7 +1330,7 @@ static int virtnet_receive(struct receive_queue *rq, int budget,
return stats.packets;
 }
 
-static void free_old_xmit_skbs(struct send_queue *sq)
+static void free_old_xmit_skbs(struct send_queue *sq, bool in_napi)
 {
struct sk_buff *skb;
unsigned int len;
@@ -1343,7 +1343,7 @@ static void free_old_xmit_skbs(struct send_queue *sq)
bytes += skb->len;
packets++;
 
-   dev_consume_skb_any(skb);
+   napi_consume_skb(skb, in_napi);
}
 
/* Avoid overhead when no packets have been processed
@@ -1369,7 +1369,7 @@ static void virtnet_poll_cleantx(struct receive_queue *rq)
return;
 
if (__netif_tx_trylock(txq)) {
-   free_old_xmit_skbs(sq);
+   free_old_xmit_skbs(sq, true);
__netif_tx_unlock(txq);
}
 
@@ -1445,7 +1445,7 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
struct netdev_queue *txq = netdev_get_tx_queue(vi->dev, vq2txq(sq->vq));
 
__netif_tx_lock(txq, raw_smp_processor_id());
-   free_old_xmit_skbs(sq);
+   free_old_xmit_skbs(sq, true);
__netif_tx_unlock(txq);
 
virtqueue_napi_complete(napi, sq->vq, 0);
@@ -1514,7 +1514,7 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
bool use_napi = sq->napi.weight;
 
/* Free up any pending old buffers before queueing new ones. */
-   free_old_xmit_skbs(sq);
+   free_old_xmit_skbs(sq, false);
 
if (use_napi && kick)
virtqueue_enable_cb_delayed(sq->vq);
@@ -1557,7 +1557,7 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
if (!use_napi &&
unlikely(!virtqueue_enable_cb_delayed(sq->vq))) {
/* More just got used, free them then recheck. */
-   free_old_xmit_skbs(sq);
+   free_old_xmit_skbs(sq, false);
if (sq->vq->num_free >= 2+MAX_SKB_FRAGS) {
netif_start_subqueue(dev, qnum);
virtqueue_disable_cb(sq->vq);
-- 
MST


Re: [PATCH net 5/7] virtio_net: Don't process redirected XDP frames when XDP is disabled

2019-01-18 Thread Jason Wang


On 2019/1/18 9:56 AM, Toshiaki Makita wrote:

On 2019/01/17 22:05, Jason Wang wrote:

On 2019/1/17 8:53 PM, Jason Wang wrote:

On 2019/1/17 7:20 PM, Toshiaki Makita wrote:

Commit 8dcc5b0ab0ec ("virtio_net: fix ndo_xdp_xmit crash towards dev not
ready for XDP") tried to avoid access to unexpected sq while XDP is
disabled, but was not complete.

There was a small window which causes out of bounds sq access in
virtnet_xdp_xmit() while disabling XDP.

An example case of
   - curr_queue_pairs = 6 (2 for SKB and 4 for XDP)
   - online_cpu_num = xdp_queue_paris = 4
when XDP is enabled:

CPU 0 CPU 1
(Disabling XDP)   (Processing redirected XDP frames)

    virtnet_xdp_xmit()
virtnet_xdp_set()
   _virtnet_set_queues()
    set curr_queue_pairs (2)
     check if rq->xdp_prog is not NULL
     virtnet_xdp_sq(vi)
  qp = curr_queue_pairs -
   xdp_queue_pairs +
   smp_processor_id()
     = 2 - 4 + 1 = -1
  sq = &vi->sq[qp] // out of bounds
access
    set xdp_queue_pairs (0)
    rq->xdp_prog = NULL

Basically we should not change curr_queue_pairs and xdp_queue_pairs
while someone can read the values. Thus, when disabling XDP, assign NULL
to rq->xdp_prog first, and wait for RCU grace period, then change
xxx_queue_pairs.
Note that we need to keep the current order when enabling XDP though.

Fixes: 186b3c998c50 ("virtio-net: support XDP_REDIRECT")
Signed-off-by: Toshiaki Makita 


I wonder whether or not we could simply do:


if (prog) {


Should be !prog



     rcu_assign_pointer()

     synchronize_net()

}

set queues

if (!prog) {


Should be prog.

Either would work.

With your suggestion the code will look like:

---
if (!prog) {
for (...) {
rcu_assign_pointer();
...
}
synchronize_net();
}

virtnet_set_queues();
netif_set_real_num_rx_queues();
vi->xdp_queue_pairs = xdp_qp;

if (prog) {
for (...) {
rcu_assign_pointer();
...
}
}



Yes, I think this makes code more easier to be understand.



---

But strictly speaking, virtnet_set_queues() should not be necessary if
(prog != NULL && old_prog != NULL).



Yes, but it was another possible 'issue'.



If you prefer this, I can modify it accordingly.



I prefer to do this change.

Thanks


Re: [PATCH net 2/7] virtio_net: Don't call free_old_xmit_skbs for xdp_frames

2019-01-18 Thread Jason Wang


On 2019/1/18 9:44 AM, Toshiaki Makita wrote:

On 2019/01/17 21:39, Jason Wang wrote:

On 2019/1/17 7:20 PM, Toshiaki Makita wrote:

When napi_tx is enabled, virtnet_poll_cleantx() called
free_old_xmit_skbs() even for xdp send queue.
This is bogus since the queue has xdp_frames, not sk_buffs, thus mangled
device tx bytes counters because skb->len is meaningless value, and even
triggered oops due to general protection fault on freeing them.

Since xdp send queues do not acquire locks, old xdp_frames should be
freed only in virtnet_xdp_xmit(), so just skip free_old_xmit_skbs() for
xdp send queues.

Similarly virtnet_poll_tx() called free_old_xmit_skbs(). This NAPI
handler is called even without calling start_xmit() because cb for tx is
by default enabled. Once the handler is called, it enabled the cb again,
and then the handler would be called again. We don't need this handler
for XDP, so don't enable cb as well as not calling free_old_xmit_skbs().

Also, we need to disable tx NAPI when disabling XDP, so
virtnet_poll_tx() can safely access curr_queue_pairs and
xdp_queue_pairs, which are not atomically updated while disabling XDP.


I suggest to split this into another patch or squash this part to patch 1.

This part is for invocation of is_xdp_raw_buffer_queue() from
virtnet_poll_tx(), which is added in this patch, so I'm thinking it's
more natural to keep this hunk in this patch.



I see.

Acked-by: Jason Wang 

Thanks


Re: [PATCH net 0/7] virtio_net: Fix problems around XDP tx and napi_tx

2019-01-18 Thread Toshiaki Makita
On 2019/01/17 23:55, Michael S. Tsirkin wrote:
> On Thu, Jan 17, 2019 at 08:20:38PM +0900, Toshiaki Makita wrote:
>> While I'm looking into how to account standard tx counters on XDP tx
>> processing, I found several bugs around XDP tx and napi_tx.
>>
>> Patch1: Fix oops on error path. Patch2 depends on this.
>> Patch2: Fix memory corruption on freeing xdp_frames with napi_tx enabled.
>> Patch3: Minor fix patch5 depends on.
>> Patch4: Fix memory corruption on processing xdp_frames when XDP is disabled.
>>   Also patch5 depends on this.
>> Patch5: Fix memory corruption on processing xdp_frames while XDP is being
>>   disabled.
>> Patch6: Minor fix patch7 depends on.
>> Patch7: Fix memory corruption on freeing sk_buff or xdp_frames when a normal
>>   queue is reused for XDP and vise versa.
>>
>> Signed-off-by: Toshiaki Makita 
> 
> Series:
> 
> Acked-by: Michael S. Tsirkin 

Thanks for the review.

> 
> I guess we need this stuff on stable?

I think so.

> I'm especially happy with Patch7 as it makes my BQL
> work a bit easier.
> 
>> Toshiaki Makita (7):
>>   virtio_net: Don't enable NAPI when interface is down
>>   virtio_net: Don't call free_old_xmit_skbs for xdp_frames
>>   virtio_net: Fix not restoring real_num_rx_queues
>>   virtio_net: Fix out of bounds access of sq
>>   virtio_net: Don't process redirected XDP frames when XDP is disabled
>>   virtio_net: Use xdp_return_frame to free xdp_frames on destroying vqs
>>   virtio_net: Differentiate sk_buff and xdp_frame on freeing
>>
>>  drivers/net/virtio_net.c | 154 
>> +--
>>  1 file changed, 109 insertions(+), 45 deletions(-)
>>
>> -- 
>> 1.8.3.1
>>
> 
> 

-- 
Toshiaki Makita



Re: [PATCH net 5/7] virtio_net: Don't process redirected XDP frames when XDP is disabled

2019-01-18 Thread Toshiaki Makita
On 2019/01/17 22:05, Jason Wang wrote:
> On 2019/1/17 8:53 PM, Jason Wang wrote:
>> On 2019/1/17 7:20 PM, Toshiaki Makita wrote:
>>> Commit 8dcc5b0ab0ec ("virtio_net: fix ndo_xdp_xmit crash towards dev not
>>> ready for XDP") tried to avoid access to unexpected sq while XDP is
>>> disabled, but was not complete.
>>>
>>> There was a small window which causes out of bounds sq access in
>>> virtnet_xdp_xmit() while disabling XDP.
>>>
>>> An example case of
>>>   - curr_queue_pairs = 6 (2 for SKB and 4 for XDP)
>>>   - online_cpu_num = xdp_queue_paris = 4
>>> when XDP is enabled:
>>>
>>> CPU 0 CPU 1
>>> (Disabling XDP)   (Processing redirected XDP frames)
>>>
>>>    virtnet_xdp_xmit()
>>> virtnet_xdp_set()
>>>   _virtnet_set_queues()
>>>    set curr_queue_pairs (2)
>>>     check if rq->xdp_prog is not NULL
>>>     virtnet_xdp_sq(vi)
>>>  qp = curr_queue_pairs -
>>>   xdp_queue_pairs +
>>>   smp_processor_id()
>>>     = 2 - 4 + 1 = -1
>>>  sq = &vi->sq[qp] // out of bounds
>>> access
>>>    set xdp_queue_pairs (0)
>>>    rq->xdp_prog = NULL
>>>
>>> Basically we should not change curr_queue_pairs and xdp_queue_pairs
>>> while someone can read the values. Thus, when disabling XDP, assign NULL
>>> to rq->xdp_prog first, and wait for RCU grace period, then change
>>> xxx_queue_pairs.
>>> Note that we need to keep the current order when enabling XDP though.
>>>
>>> Fixes: 186b3c998c50 ("virtio-net: support XDP_REDIRECT")
>>> Signed-off-by: Toshiaki Makita 
>>
>>
>> I wonder whether or not we could simply do:
>>
>>
>> if (prog) {
> 
> 
> Should be !prog
> 
> 
>>
>>     rcu_assign_pointer()
>>
>>     synchronize_net()
>>
>> }
>>
>> set queues
>>
>> if (!prog) {
> 
> 
> Should be prog.

Either would work.

With your suggestion the code will look like:

---
if (!prog) {
for (...) {
rcu_assign_pointer();
...
}
synchronize_net();
}

virtnet_set_queues();
netif_set_real_num_rx_queues();
vi->xdp_queue_pairs = xdp_qp;

if (prog) {
for (...) {
rcu_assign_pointer();
...
}
}
---

But strictly speaking, virtnet_set_queues() should not be necessary if
(prog != NULL && old_prog != NULL).
If you prefer this, I can modify it accordingly.

-- 
Toshiaki Makita


Re: [PATCH net 2/7] virtio_net: Don't call free_old_xmit_skbs for xdp_frames

2019-01-18 Thread Toshiaki Makita
On 2019/01/17 21:39, Jason Wang wrote:
> On 2019/1/17 7:20 PM, Toshiaki Makita wrote:
>> When napi_tx is enabled, virtnet_poll_cleantx() called
>> free_old_xmit_skbs() even for xdp send queue.
>> This is bogus since the queue has xdp_frames, not sk_buffs, thus mangled
>> device tx bytes counters because skb->len is meaningless value, and even
>> triggered oops due to general protection fault on freeing them.
>>
>> Since xdp send queues do not acquire locks, old xdp_frames should be
>> freed only in virtnet_xdp_xmit(), so just skip free_old_xmit_skbs() for
>> xdp send queues.
>>
>> Similarly virtnet_poll_tx() called free_old_xmit_skbs(). This NAPI
>> handler is called even without calling start_xmit() because cb for tx is
>> by default enabled. Once the handler is called, it enabled the cb again,
>> and then the handler would be called again. We don't need this handler
>> for XDP, so don't enable cb as well as not calling free_old_xmit_skbs().
>>
>> Also, we need to disable tx NAPI when disabling XDP, so
>> virtnet_poll_tx() can safely access curr_queue_pairs and
>> xdp_queue_pairs, which are not atomically updated while disabling XDP.
> 
> 
> I suggest to split this into another patch or squash this part to patch 1.

This part is for invocation of is_xdp_raw_buffer_queue() from
virtnet_poll_tx(), which is added in this patch, so I'm thinking it's
more natural to keep this hunk in this patch.

-- 
Toshiaki Makita


[PATCH] drm: Split out drm_probe_helper.h

2019-01-18 Thread Daniel Vetter
Having the probe helper stuff (which pretty much everyone needs) in
the drm_crtc_helper.h file (which atomic drivers should never need) is
confusing. Split them out.

To make sure I actually achieved the goal here I went through all
drivers. And indeed, all atomic drivers are now free of
drm_crtc_helper.h includes.
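For context, the per-driver change is mechanical: a driver that only needed
the probe helpers swaps one include for the other. A hypothetical hunk (the
driver path here is illustrative only, not taken from the actual series):

```diff
--- a/drivers/gpu/drm/foo/foo_drv.c
+++ b/drivers/gpu/drm/foo/foo_drv.c
@@
-#include <drm/drm_crtc_helper.h>
+#include <drm/drm_probe_helper.h>
```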

v2: Make it compile. There were so many compile failures on arm drivers
that I figured I'd better not carry over any of the acks from v1.

v3: Massive rebase because i915 has lost a lot of drmP.h includes, but
not all: through drm_crtc_helper.h -> drm_modeset_helper.h -> drmP.h
there was still one left, which this patch largely removes. That means
rolling out lots more includes all over.

This will also conflict with ongoing drmP.h cleanup by others I
expect.

v3: Rebase on top of atomic bochs.

v4: Review from Laurent for the bridge/rcar/omap/shmob/core bits:
- (re)move some of the added includes, use the better include files in
  other places (all of Laurent's suggestions adopted unchanged).
- sort alphabetically

v5: Actually try to sort them, and while at it, sort all the ones I
touch.

v6: Rebase onto i915 changes.

Cc: Sam Ravnborg 
Cc: Jani Nikula 
Cc: Laurent Pinchart 
Acked-by: Rodrigo Vivi 
Acked-by: Benjamin Gaignard 
Acked-by: Jani Nikula 
Acked-by: Neil Armstrong 
Acked-by: Oleksandr Andrushchenko 
Acked-by: CK Hu 
Acked-by: Alex Deucher 
Acked-by: Sam Ravnborg 
Reviewed-by: Laurent Pinchart 
Acked-by: Liviu Dudau 
Signed-off-by: Daniel Vetter 
Cc: linux-arm-ker...@lists.infradead.org
Cc: virtualization@lists.linux-foundation.org
Cc: etna...@lists.freedesktop.org
Cc: linux-samsung-...@vger.kernel.org
Cc: intel-...@lists.freedesktop.org
Cc: linux-media...@lists.infradead.org
Cc: linux-amlo...@lists.infradead.org
Cc: linux-arm-...@vger.kernel.org
Cc: freedr...@lists.freedesktop.org
Cc: nouv...@lists.freedesktop.org
Cc: spice-de...@lists.freedesktop.org
Cc: amd-...@lists.freedesktop.org
Cc: linux-renesas-...@vger.kernel.org
Cc: linux-rockc...@lists.infradead.org
Cc: linux-st...@st-md-mailman.stormreply.com
Cc: linux-te...@vger.kernel.org
Cc: xen-de...@lists.xen.org
---
 .../gpu/drm/amd/amdgpu/amdgpu_connectors.c|  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c|  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c   |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h  |  1 +
 .../amd/display/amdgpu_dm/amdgpu_dm_helpers.c |  2 +-
 .../amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c  |  2 +-
 .../display/amdgpu_dm/amdgpu_dm_services.c|  2 +-
 drivers/gpu/drm/arc/arcpgu_crtc.c |  2 +-
 drivers/gpu/drm/arc/arcpgu_drv.c  |  6 ++---
 drivers/gpu/drm/arc/arcpgu_sim.c  |  2 +-
 drivers/gpu/drm/arm/hdlcd_crtc.c  |  4 +--
 drivers/gpu/drm/arm/hdlcd_drv.c   |  4 +--
 drivers/gpu/drm/arm/malidp_crtc.c |  2 +-
 drivers/gpu/drm/arm/malidp_drv.c  |  2 +-
 drivers/gpu/drm/arm/malidp_mw.c   |  2 +-
 drivers/gpu/drm/armada/armada_510.c   |  2 +-
 drivers/gpu/drm/armada/armada_crtc.c  |  2 +-
 drivers/gpu/drm/armada/armada_crtc.h  |  2 ++
 drivers/gpu/drm/armada/armada_drv.c   |  2 +-
 drivers/gpu/drm/armada/armada_fb.c|  2 +-
 drivers/gpu/drm/ast/ast_drv.c |  1 +
 drivers/gpu/drm/ast/ast_mode.c|  1 +
 .../gpu/drm/atmel-hlcdc/atmel_hlcdc_crtc.c|  2 +-
 drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_dc.h  |  2 +-
 drivers/gpu/drm/bochs/bochs_drv.c |  1 +
 drivers/gpu/drm/bochs/bochs_kms.c |  1 +
 drivers/gpu/drm/bridge/adv7511/adv7511.h  |  4 ++-
 drivers/gpu/drm/bridge/adv7511/adv7511_drv.c  |  1 +
 drivers/gpu/drm/bridge/analogix-anx78xx.c |  2 +-
 .../drm/bridge/analogix/analogix_dp_core.c|  2 +-
 drivers/gpu/drm/bridge/cdns-dsi.c |  2 +-
 drivers/gpu/drm/bridge/dumb-vga-dac.c |  2 +-
 .../bridge/megachips-stdp-ge-b850v3-fw.c  |  2 +-
 drivers/gpu/drm/bridge/nxp-ptn3460.c  |  2 +-
 drivers/gpu/drm/bridge/panel.c|  2 +-
 drivers/gpu/drm/bridge/parade-ps8622.c|  2 +-
 drivers/gpu/drm/bridge/sii902x.c  |  2 +-
 drivers/gpu/drm/bridge/synopsys/dw-hdmi.c |  2 +-
 drivers/gpu/drm/bridge/synopsys/dw-mipi-dsi.c |  2 +-
 drivers/gpu/drm/bridge/tc358764.c |  2 +-
 drivers/gpu/drm/bridge/tc358767.c |  2 +-
 drivers/gpu/drm/bridge/ti-sn65dsi86.c |  2 +-
 drivers/gpu/drm/bridge/ti-tfp410.c|  2 +-
 drivers/gpu/drm/cirrus/cirrus_drv.c   |  1 +
 drivers/gpu/drm/cirrus/cirrus_mode.c  |  1 +
 drivers/gpu/drm/drm_atomic_helper.c   |  1 -
 drivers/gpu/drm/drm_dp_mst_topology.c |  2 +-
 drivers/gpu/drm/drm_modeset_helper.c  |  2 +-
 drivers/gpu/drm/drm_probe_helper.c|  2 +-
 drivers/gpu/drm/drm_simple_kms_helper.c   |  2 +-
 drivers/gpu/drm/etnaviv/etnaviv_drv.h |  1 -
 drivers/gpu/drm/exynos/exynos_dp.c|  3 ++-
 drivers/gpu/drm/exynos/exynos_drm