[PATCH] virtio_balloon: export huge page allocation statistics

2018-02-16 Thread Jonathan Helman
Export statistics for successful and failed huge page allocations
from the virtio balloon driver. These two stats come directly from
the vm_event counters HTLB_BUDDY_PGALLOC and HTLB_BUDDY_PGALLOC_FAIL.

Signed-off-by: Jonathan Helman 
---
 drivers/virtio/virtio_balloon.c | 6 ++++++
 include/uapi/linux/virtio_balloon.h | 4 +++-
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index dfe5684..6b237e3 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -272,6 +272,12 @@ static unsigned int update_balloon_stats(struct virtio_balloon *vb)
pages_to_bytes(events[PSWPOUT]));
update_stat(vb, idx++, VIRTIO_BALLOON_S_MAJFLT, events[PGMAJFAULT]);
update_stat(vb, idx++, VIRTIO_BALLOON_S_MINFLT, events[PGFAULT]);
+#ifdef CONFIG_HUGETLB_PAGE
+   update_stat(vb, idx++, VIRTIO_BALLOON_S_HTLB_PGALLOC,
+   events[HTLB_BUDDY_PGALLOC]);
+   update_stat(vb, idx++, VIRTIO_BALLOON_S_HTLB_PGFAIL,
+   events[HTLB_BUDDY_PGALLOC_FAIL]);
+#endif
 #endif
update_stat(vb, idx++, VIRTIO_BALLOON_S_MEMFREE,
pages_to_bytes(i.freeram));
diff --git a/include/uapi/linux/virtio_balloon.h b/include/uapi/linux/virtio_balloon.h
index 4e8b830..e3e8071 100644
--- a/include/uapi/linux/virtio_balloon.h
+++ b/include/uapi/linux/virtio_balloon.h
@@ -53,7 +53,9 @@ struct virtio_balloon_config {
 #define VIRTIO_BALLOON_S_MEMTOT   5   /* Total amount of memory */
 #define VIRTIO_BALLOON_S_AVAIL6   /* Available memory as in /proc */
 #define VIRTIO_BALLOON_S_CACHES   7   /* Disk caches */
-#define VIRTIO_BALLOON_S_NR   8
+#define VIRTIO_BALLOON_S_HTLB_PGALLOC  8  /* Number of htlb pgalloc successes */
+#define VIRTIO_BALLOON_S_HTLB_PGFAIL   9  /* Number of htlb pgalloc failures */
+#define VIRTIO_BALLOON_S_NR   10
 
 /*
  * Memory statistics structure.
-- 
1.8.3.1

___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


Re: [PATCH v2 0/6] crypto: engine - Permit to enqueue all async requests

2018-02-16 Thread Herbert Xu
On Fri, Feb 16, 2018 at 04:36:56PM +0100, Corentin Labbe wrote:
>
> As mentioned in the cover letter, all patches (except the documentation one)
> should be squashed.
> A kbuild robot reported a build error on cryptodev due to this.

It's too late now.  In future if you want the patches to be squashed
then please send them in one email.

Thanks,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: [RFC PATCH v3 2/3] virtio_net: Extend virtio to use VF datapath when available

2018-02-16 Thread Jakub Kicinski
On Fri, 16 Feb 2018 10:11:21 -0800, Sridhar Samudrala wrote:
> This patch enables virtio_net to switch over to a VF datapath when a VF
> netdev is present with the same MAC address. It allows live migration
> of a VM with a direct attached VF without the need to setup a bond/team
> between a VF and virtio net device in the guest.
> 
> The hypervisor needs to enable only one datapath at any time so that
> packets don't get looped back to the VM over the other datapath. When a VF
> is plugged, the virtio datapath link state can be marked as down. The
> hypervisor needs to unplug the VF device from the guest on the source host
> and reset the MAC filter of the VF to initiate failover of datapath to
> virtio before starting the migration. After the migration is completed,
> the destination hypervisor sets the MAC filter on the VF and plugs it back
> to the guest to switch over to VF datapath.
> 
> When BACKUP feature is enabled, an additional netdev(bypass netdev) is
> created that acts as a master device and tracks the state of the 2 lower
> netdevs. The original virtio_net netdev is marked as 'backup' netdev and a
> passthru device with the same MAC is registered as 'active' netdev.
> 
> This patch is based on the discussion initiated by Jesse on this thread.
> https://marc.info/?l=linux-virtualization&m=151189725224231&w=2
> 
> Signed-off-by: Sridhar Samudrala 
> Signed-off-by: Alexander Duyck  

> +static void
> +virtnet_bypass_get_stats(struct net_device *dev,
> +  struct rtnl_link_stats64 *stats)
> +{
> + struct virtnet_bypass_info *vbi = netdev_priv(dev);
> + const struct rtnl_link_stats64 *new;
> + struct rtnl_link_stats64 temp;
> + struct net_device *child_netdev;
> +
> + spin_lock(&vbi->stats_lock);
> + memcpy(stats, &vbi->bypass_stats, sizeof(*stats));
> +
> + rcu_read_lock();
> +
> + child_netdev = rcu_dereference(vbi->active_netdev);
> + if (child_netdev) {
> + new = dev_get_stats(child_netdev, &temp);
> + virtnet_bypass_fold_stats(stats, new, &vbi->active_stats);
> + memcpy(&vbi->active_stats, new, sizeof(*new));
> + }
> +
> + child_netdev = rcu_dereference(vbi->backup_netdev);
> + if (child_netdev) {
> + new = dev_get_stats(child_netdev, &temp);
> + virtnet_bypass_fold_stats(stats, new, &vbi->backup_stats);
> + memcpy(&vbi->backup_stats, new, sizeof(*new));
> + }
> +
> + rcu_read_unlock();
> +
> + memcpy(&vbi->bypass_stats, stats, sizeof(*stats));
> + spin_unlock(&vbi->stats_lock);
> +}
> +
> +static int virtnet_bypass_change_mtu(struct net_device *dev, int new_mtu)
> +{
> + struct virtnet_bypass_info *vbi = netdev_priv(dev);
> + struct net_device *child_netdev;
> + int ret = 0;
> +
> + child_netdev = rcu_dereference(vbi->active_netdev);
> + if (child_netdev) {
> + ret = dev_set_mtu(child_netdev, new_mtu);
> + if (ret)
> + return ret;
> + }
> +
> + child_netdev = rcu_dereference(vbi->backup_netdev);
> + if (child_netdev) {
> + ret = dev_set_mtu(child_netdev, new_mtu);
> + if (ret)
> + netdev_err(child_netdev,
> +"Unexpected failure to set mtu to %d\n",
> +new_mtu);

You should probably unwind if set fails on one of the legs.

> + }
> +
> + dev->mtu = new_mtu;
> + return 0;
> +}

nit: stats, mtu, all those mundane things are implemented in team
 already.  If we had this as kernel-internal team mode we wouldn't
 have to reimplement them...  You probably did investigate that
 option, for my edification, would you mind saying what the
 challenges/downsides were?

> +static struct net_device *
> +get_virtnet_bypass_bymac(struct net *net, const u8 *mac)
> +{
> + struct net_device *dev;
> +
> + ASSERT_RTNL();
> +
> + for_each_netdev(net, dev) {
> + if (dev->netdev_ops != &virtnet_bypass_netdev_ops)
> + continue;   /* not a virtnet_bypass device */

Is there anything inherently wrong with enslaving another virtio dev
now?  I was expecting something like a hash map to map MAC addr ->
master and then one can check if dev is already enslaved to that master.
Just a random thought, I'm probably missing something...

> + if (ether_addr_equal(mac, dev->perm_addr))
> + return dev;
> + }
> +
> + return NULL;
> +}
> +
> +static struct net_device *
> +get_virtnet_bypass_byref(struct net_device *child_netdev)
> +{
> + struct net *net = dev_net(child_netdev);
> + struct net_device *dev;
> +
> + ASSERT_RTNL();
> +
> + for_each_netdev(net, dev) {
> + struct virtnet_bypass_info *vbi;
> +
> + if (dev->netdev_ops != &virtnet_bypass_netdev_ops)
> + continue;   /* not a virtnet_bypass device */
> +
> + vbi = 

Re: [RFC PATCH v3 0/3] Enable virtio_net to act as a backup for a passthru device

2018-02-16 Thread Jakub Kicinski
On Fri, 16 Feb 2018 10:11:19 -0800, Sridhar Samudrala wrote:
> Patch 2 is in response to the community request for a 3 netdev
> solution.  However, it creates some issues we'll get into in a moment.
> It extends virtio_net to use alternate datapath when available and
> registered. When BACKUP feature is enabled, virtio_net driver creates
> an additional 'bypass' netdev that acts as a master device and controls
> 2 slave devices.  The original virtio_net netdev is registered as
> 'backup' netdev and a passthru/vf device with the same MAC gets
> registered as 'active' netdev. Both 'bypass' and 'backup' netdevs are
> associated with the same 'pci' device.  The user accesses the network
> interface via 'bypass' netdev. The 'bypass' netdev chooses 'active' netdev
> as default for transmits when it is available with link up and running.

Thank you for doing this.

> We noticed a couple of issues with this approach during testing.
> - As both 'bypass' and 'backup' netdevs are associated with the same
>   virtio pci device, udev tries to rename both of them with the same name
>   and the 2nd rename will fail. This would be OK as long as the first netdev
>   to be renamed is the 'bypass' netdev, but the order in which udev gets
>   to rename the 2 netdevs is not reliable. 

Out of curiosity - why do you link the master netdev to the virtio
struct device?

FWIW two solutions that immediately come to mind is to export "backup"
as phys_port_name of the backup virtio link and/or assign a name to the
master like you are doing already.  I think team uses team%d and bond
uses bond%d, soft naming of master devices seems quite natural in this
case.

IMHO phys_port_name == "backup" if BACKUP bit is set on slave virtio
link is quite neat.

> - When the 'active' netdev is unplugged OR not present on a destination
>   system after live migration, the user will see 2 virtio_net netdevs.

That's necessary and expected, all configuration applies to the master
so master must exist.


Re: [PATCH v21 1/5] xbitmap: Introduce xbitmap

2018-02-16 Thread Matthew Wilcox
On Fri, Feb 16, 2018 at 11:45:51PM +0200, Andy Shevchenko wrote:
> Now, the question about test case. Why do you heavily use BUG_ON?
> Isn't resulting statistics enough?

No.  If any of those tests fail, we want to stop dead.  They'll lead to
horrendous bugs throughout the kernel if they're wrong.  I think more of
the in-kernel test suite should stop dead instead of printing a warning.
Would you want to boot a machine which has a known bug in the page cache,
for example?


Re: [PATCH v21 1/5] xbitmap: Introduce xbitmap

2018-02-16 Thread Andy Shevchenko
On Fri, Feb 16, 2018 at 8:30 PM, Matthew Wilcox  wrote:
> On Fri, Feb 16, 2018 at 07:44:50PM +0200, Andy Shevchenko wrote:
>> On Tue, Jan 9, 2018 at 1:10 PM, Wei Wang  wrote:
>> > From: Matthew Wilcox 
>> >
>> > The eXtensible Bitmap is a sparse bitmap representation which is
>> > efficient for set bits which tend to cluster. It supports up to
>> > 'unsigned long' worth of bits.
>>
>> >  lib/xbitmap.c | 444 +++
>>
>> Please, split tests to a separate module.
>
> Hah, I just did this two days ago!  I didn't publish it yet, but I also made
> it compile both in userspace and as a kernel module.
>
> It's the top two commits here:
>
> http://git.infradead.org/users/willy/linux-dax.git/shortlog/refs/heads/xarray-2018-02-12
>

Thanks!

> Note this is a complete rewrite compared to the version presented here; it
> sits on top of the XArray and no longer has a preload interface.  It has a
> superset of the IDA functionality.

Noted.

Now, the question about test case. Why do you heavily use BUG_ON?
Isn't resulting statistics enough?

See how other lib/test_* modules do.

-- 
With Best Regards,
Andy Shevchenko


Re: [PATCH v21 1/5] xbitmap: Introduce xbitmap

2018-02-16 Thread Matthew Wilcox
On Fri, Feb 16, 2018 at 07:44:50PM +0200, Andy Shevchenko wrote:
> On Tue, Jan 9, 2018 at 1:10 PM, Wei Wang  wrote:
> > From: Matthew Wilcox 
> >
> > The eXtensible Bitmap is a sparse bitmap representation which is
> > efficient for set bits which tend to cluster. It supports up to
> > 'unsigned long' worth of bits.
> 
> >  lib/xbitmap.c | 444 +++
> 
> Please, split tests to a separate module.

Hah, I just did this two days ago!  I didn't publish it yet, but I also made
it compile both in userspace and as a kernel module.  

It's the top two commits here:

http://git.infradead.org/users/willy/linux-dax.git/shortlog/refs/heads/xarray-2018-02-12

Note this is a complete rewrite compared to the version presented here; it
sits on top of the XArray and no longer has a preload interface.  It has a
superset of the IDA functionality.


[RFC PATCH v3 3/3] virtio_net: Enable alternate datapath without creating an additional netdev

2018-02-16 Thread Sridhar Samudrala
This patch addresses the issues that were seen with the 3 netdev model by
avoiding the creation of an additional netdev. Instead the bypass state
information is tracked in the original netdev and a different set of
ndo_ops and ethtool_ops are used when BACKUP feature is enabled.

Signed-off-by: Sridhar Samudrala 
Reviewed-by: Alexander Duyck  
---
 drivers/net/virtio_net.c | 283 +--
 1 file changed, 101 insertions(+), 182 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 14679806c1b1..c85b2949f151 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -154,7 +154,7 @@ struct virtnet_bypass_info {
struct net_device __rcu *active_netdev;
 
/* virtio_net netdev */
-   struct net_device __rcu *backup_netdev;
+   struct net_device *backup_netdev;
 
/* active netdev stats */
struct rtnl_link_stats64 active_stats;
@@ -229,8 +229,8 @@ struct virtnet_info {
 
unsigned long guest_offloads;
 
-   /* upper netdev created when BACKUP feature enabled */
-   struct net_device *bypass_netdev;
+   /* bypass state maintained when BACKUP feature is enabled */
+   struct virtnet_bypass_info *vbi;
 };
 
 struct padded_vnet_hdr {
@@ -2285,6 +2285,22 @@ static bool virtnet_bypass_xmit_ready(struct net_device *dev)
return netif_running(dev) && netif_carrier_ok(dev);
 }
 
+static bool virtnet_bypass_active_ready(struct net_device *dev)
+{
+   struct virtnet_info *vi = netdev_priv(dev);
+   struct virtnet_bypass_info *vbi = vi->vbi;
+   struct net_device *active;
+
+   if (!vbi)
+   return false;
+
+   active = rcu_dereference(vbi->active_netdev);
+   if (!active || !virtnet_bypass_xmit_ready(active))
+   return false;
+
+   return true;
+}
+
 static void virtnet_config_changed_work(struct work_struct *work)
 {
struct virtnet_info *vi =
@@ -2312,7 +2328,7 @@ static void virtnet_config_changed_work(struct work_struct *work)
virtnet_update_settings(vi);
netif_carrier_on(vi->dev);
netif_tx_wake_all_queues(vi->dev);
-   } else {
+   } else if (!virtnet_bypass_active_ready(vi->dev)) {
netif_carrier_off(vi->dev);
netif_tx_stop_all_queues(vi->dev);
}
@@ -2501,7 +2517,8 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
 
if (vi->has_cvq) {
vi->cvq = vqs[total_vqs - 1];
-   if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_CTRL_VLAN))
+   if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_CTRL_VLAN) &&
+   !virtio_has_feature(vi->vdev, VIRTIO_NET_F_BACKUP))
vi->dev->features |= NETIF_F_HW_VLAN_CTAG_FILTER;
}
 
@@ -2690,62 +2707,54 @@ virtnet_bypass_child_open(struct net_device *dev,
 
 static int virtnet_bypass_open(struct net_device *dev)
 {
-   struct virtnet_bypass_info *vbi = netdev_priv(dev);
+   struct virtnet_info *vi = netdev_priv(dev);
+   struct virtnet_bypass_info *vbi = vi->vbi;
struct net_device *child_netdev;
-
-   netif_carrier_off(dev);
-   netif_tx_wake_all_queues(dev);
+   int err;
 
child_netdev = rtnl_dereference(vbi->active_netdev);
if (child_netdev)
virtnet_bypass_child_open(dev, child_netdev);
 
-   child_netdev = rtnl_dereference(vbi->backup_netdev);
-   if (child_netdev)
-   virtnet_bypass_child_open(dev, child_netdev);
+   err = virtnet_open(dev);
+   if (err < 0) {
+   dev_close(child_netdev);
+   return err;
+   }
 
return 0;
 }
 
 static int virtnet_bypass_close(struct net_device *dev)
 {
-   struct virtnet_bypass_info *vi = netdev_priv(dev);
+   struct virtnet_info *vi = netdev_priv(dev);
+   struct virtnet_bypass_info *vbi = vi->vbi;
struct net_device *child_netdev;
 
-   netif_tx_disable(dev);
+   virtnet_close(dev);
 
-   child_netdev = rtnl_dereference(vi->active_netdev);
-   if (child_netdev)
-   dev_close(child_netdev);
+   if (!vbi)
+   goto done;
 
-   child_netdev = rtnl_dereference(vi->backup_netdev);
+   child_netdev = rtnl_dereference(vbi->active_netdev);
if (child_netdev)
dev_close(child_netdev);
 
+done:
return 0;
 }
 
-static netdev_tx_t
-virtnet_bypass_drop_xmit(struct sk_buff *skb, struct net_device *dev)
-{
-   atomic_long_inc(&dev->tx_dropped);
-   dev_kfree_skb_any(skb);
-   return NETDEV_TX_OK;
-}
-
 static netdev_tx_t
 virtnet_bypass_start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
-   struct virtnet_bypass_info *vbi = netdev_priv(dev);
+   struct virtnet_info *vi = netdev_priv(dev);
+   struct virtnet_bypass_info *vbi = vi->vbi;
struct net_device 

[RFC PATCH v3 2/3] virtio_net: Extend virtio to use VF datapath when available

2018-02-16 Thread Sridhar Samudrala
This patch enables virtio_net to switch over to a VF datapath when a VF
netdev is present with the same MAC address. It allows live migration
of a VM with a direct attached VF without the need to setup a bond/team
between a VF and virtio net device in the guest.

The hypervisor needs to enable only one datapath at any time so that
packets don't get looped back to the VM over the other datapath. When a VF
is plugged, the virtio datapath link state can be marked as down. The
hypervisor needs to unplug the VF device from the guest on the source host
and reset the MAC filter of the VF to initiate failover of datapath to
virtio before starting the migration. After the migration is completed,
the destination hypervisor sets the MAC filter on the VF and plugs it back
to the guest to switch over to VF datapath.

When BACKUP feature is enabled, an additional netdev(bypass netdev) is
created that acts as a master device and tracks the state of the 2 lower
netdevs. The original virtio_net netdev is marked as 'backup' netdev and a
passthru device with the same MAC is registered as 'active' netdev.

This patch is based on the discussion initiated by Jesse on this thread.
https://marc.info/?l=linux-virtualization&m=151189725224231&w=2

Signed-off-by: Sridhar Samudrala 
Signed-off-by: Alexander Duyck  
---
 drivers/net/virtio_net.c | 639 ++-
 1 file changed, 638 insertions(+), 1 deletion(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index bcd13fe906ca..14679806c1b1 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -30,6 +30,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 
@@ -147,6 +148,27 @@ struct receive_queue {
struct xdp_rxq_info xdp_rxq;
 };
 
+/* bypass state maintained when BACKUP feature is enabled */
+struct virtnet_bypass_info {
+   /* passthru netdev with same MAC */
+   struct net_device __rcu *active_netdev;
+
+   /* virtio_net netdev */
+   struct net_device __rcu *backup_netdev;
+
+   /* active netdev stats */
+   struct rtnl_link_stats64 active_stats;
+
+   /* backup netdev stats */
+   struct rtnl_link_stats64 backup_stats;
+
+   /* aggregated stats */
+   struct rtnl_link_stats64 bypass_stats;
+
+   /* spinlock while updating stats */
+   spinlock_t stats_lock;
+};
+
 struct virtnet_info {
struct virtio_device *vdev;
struct virtqueue *cvq;
@@ -206,6 +228,9 @@ struct virtnet_info {
u32 speed;
 
unsigned long guest_offloads;
+
+   /* upper netdev created when BACKUP feature enabled */
+   struct net_device *bypass_netdev;
 };
 
 struct padded_vnet_hdr {
@@ -2255,6 +2280,11 @@ static const struct net_device_ops virtnet_netdev = {
.ndo_features_check = passthru_features_check,
 };
 
+static bool virtnet_bypass_xmit_ready(struct net_device *dev)
+{
+   return netif_running(dev) && netif_carrier_ok(dev);
+}
+
 static void virtnet_config_changed_work(struct work_struct *work)
 {
struct virtnet_info *vi =
@@ -2647,6 +2677,601 @@ static int virtnet_validate(struct virtio_device *vdev)
return 0;
 }
 
+static void
+virtnet_bypass_child_open(struct net_device *dev,
+ struct net_device *child_netdev)
+{
+   int err = dev_open(child_netdev);
+
+   if (err)
+   netdev_warn(dev, "unable to open slave: %s: %d\n",
+   child_netdev->name, err);
+}
+
+static int virtnet_bypass_open(struct net_device *dev)
+{
+   struct virtnet_bypass_info *vbi = netdev_priv(dev);
+   struct net_device *child_netdev;
+
+   netif_carrier_off(dev);
+   netif_tx_wake_all_queues(dev);
+
+   child_netdev = rtnl_dereference(vbi->active_netdev);
+   if (child_netdev)
+   virtnet_bypass_child_open(dev, child_netdev);
+
+   child_netdev = rtnl_dereference(vbi->backup_netdev);
+   if (child_netdev)
+   virtnet_bypass_child_open(dev, child_netdev);
+
+   return 0;
+}
+
+static int virtnet_bypass_close(struct net_device *dev)
+{
+   struct virtnet_bypass_info *vi = netdev_priv(dev);
+   struct net_device *child_netdev;
+
+   netif_tx_disable(dev);
+
+   child_netdev = rtnl_dereference(vi->active_netdev);
+   if (child_netdev)
+   dev_close(child_netdev);
+
+   child_netdev = rtnl_dereference(vi->backup_netdev);
+   if (child_netdev)
+   dev_close(child_netdev);
+
+   return 0;
+}
+
+static netdev_tx_t
+virtnet_bypass_drop_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+   atomic_long_inc(&dev->tx_dropped);
+   dev_kfree_skb_any(skb);
+   return NETDEV_TX_OK;
+}
+
+static netdev_tx_t
+virtnet_bypass_start_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+   struct virtnet_bypass_info *vbi = netdev_priv(dev);
+   struct net_device *xmit_dev;
+
+   /* Try 

[RFC PATCH v3 1/3] virtio_net: Introduce VIRTIO_NET_F_BACKUP feature bit

2018-02-16 Thread Sridhar Samudrala
This feature bit can be used by hypervisor to indicate virtio_net device to
act as a backup for another device with the same MAC address.

VIRTIO_NET_F_BACKUP is defined as bit 62 as it is a device feature bit.

Signed-off-by: Sridhar Samudrala 
---
 drivers/net/virtio_net.c| 2 +-
 include/uapi/linux/virtio_net.h | 3 +++
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 626c27352ae2..bcd13fe906ca 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -2920,7 +2920,7 @@ static struct virtio_device_id id_table[] = {
VIRTIO_NET_F_GUEST_ANNOUNCE, VIRTIO_NET_F_MQ, \
VIRTIO_NET_F_CTRL_MAC_ADDR, \
VIRTIO_NET_F_MTU, VIRTIO_NET_F_CTRL_GUEST_OFFLOADS, \
-   VIRTIO_NET_F_SPEED_DUPLEX
+   VIRTIO_NET_F_SPEED_DUPLEX, VIRTIO_NET_F_BACKUP
 
 static unsigned int features[] = {
VIRTNET_FEATURES,
diff --git a/include/uapi/linux/virtio_net.h b/include/uapi/linux/virtio_net.h
index 5de6ed37695b..c7c35fd1a5ed 100644
--- a/include/uapi/linux/virtio_net.h
+++ b/include/uapi/linux/virtio_net.h
@@ -57,6 +57,9 @@
 * Steering */
 #define VIRTIO_NET_F_CTRL_MAC_ADDR 23  /* Set MAC address */
 
+#define VIRTIO_NET_F_BACKUP  62/* Act as backup for another device
+* with the same MAC.
+*/
 #define VIRTIO_NET_F_SPEED_DUPLEX 63   /* Device set linkspeed and duplex */
 
 #ifndef VIRTIO_NET_NO_LEGACY
-- 
2.14.3



[RFC PATCH v3 0/3] Enable virtio_net to act as a backup for a passthru device

2018-02-16 Thread Sridhar Samudrala
Patch 1 introduces a new feature bit VIRTIO_NET_F_BACKUP that can be
used by hypervisor to indicate that virtio_net interface should act as
a backup for another device with the same MAC address.

Patch 2 is in response to the community request for a 3 netdev
solution.  However, it creates some issues we'll get into in a moment.
It extends virtio_net to use alternate datapath when available and
registered. When BACKUP feature is enabled, virtio_net driver creates
an additional 'bypass' netdev that acts as a master device and controls
2 slave devices.  The original virtio_net netdev is registered as
'backup' netdev and a passthru/vf device with the same MAC gets
registered as 'active' netdev. Both 'bypass' and 'backup' netdevs are
associated with the same 'pci' device.  The user accesses the network
interface via 'bypass' netdev. The 'bypass' netdev chooses 'active' netdev
as default for transmits when it is available with link up and running.

We noticed a couple of issues with this approach during testing.
- As both 'bypass' and 'backup' netdevs are associated with the same
  virtio pci device, udev tries to rename both of them with the same name
  and the 2nd rename will fail. This would be OK as long as the first netdev
  to be renamed is the 'bypass' netdev, but the order in which udev gets
  to rename the 2 netdevs is not reliable. 
- When the 'active' netdev is unplugged OR not present on a destination
  system after live migration, the user will see 2 virtio_net netdevs.

Patch 3 refactors many of the changes made in patch 2; this was done on
purpose to show the solution we recommend as part of one patch set.
If we submit a final version of this, we would combine patches 2 and 3.
This patch removes the creation of an additional netdev. Instead, it
uses a new virtnet_bypass_info struct added to the original 'backup' netdev
to track the 'bypass' information and introduces an additional set of ndo and 
ethtool ops that are used when BACKUP feature is enabled.

One difference of the 3 netdev model compared to the 2 netdev model is that
the 'bypass' netdev is created with a 'noqueue' qdisc and marked 'NETIF_F_LLTX'.
This avoids traversing an additional qdisc and taking an extra qdisc and tx
lock during transmits.
If we can replace the qdisc of virtio netdev dynamically, it should be
possible to get these optimizations enabled even with 2 netdev model when
BACKUP feature is enabled.

As this patch series initially focuses on usecases where the hypervisor
fully controls the VM networking and the guest is not expected to directly
configure any hardware settings, it doesn't expose all the ndo/ethtool ops
that are supported by virtio_net at this time. To support additional usecases,
it should be possible to enable additional ops later by caching the state
in virtio netdev and replaying when the 'active' netdev gets registered. 
 
The hypervisor needs to enable only one datapath at any time so that packets
don't get looped back to the VM over the other datapath. When a VF is
plugged, the virtio datapath link state can be marked as down.
At the time of live migration, the hypervisor needs to unplug the VF device
from the guest on the source host and reset the MAC filter of the VF to
initiate failover of datapath to virtio before starting the migration. After
the migration is completed, the destination hypervisor sets the MAC filter
on the VF and plugs it back to the guest to switch over to VF datapath.
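[Editor's sketch] From the hypervisor side, that failover sequence might look roughly like the following. Device ids, interface names, addresses, and the MAC values are all hypothetical; device_del/device_add/migrate are QEMU monitor commands, while the ip commands run in the host shell:

```shell
# --- Source host ---
# In the QEMU monitor: unplug the VF so the guest fails over to virtio.
#   (qemu) device_del hostdev0
# In the host shell: reset the VF MAC filter (placeholder values).
ip link set enp3s0f0 vf 0 mac 00:00:00:00:00:00
# In the QEMU monitor: start the live migration.
#   (qemu) migrate -d tcp:dst-host:4444

# --- Destination host, after migration completes ---
# Re-program the guest MAC on the VF, then plug it back in.
ip link set enp3s0f0 vf 0 mac 52:54:00:12:34:56
#   (qemu) device_add vfio-pci,host=03:10.0,id=hostdev0
```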

This patch is based on the discussion initiated by Jesse on this thread.
https://marc.info/?l=linux-virtualization&m=151189725224231&w=2

Sridhar Samudrala (3):
  virtio_net: Introduce VIRTIO_NET_F_BACKUP feature bit
  virtio_net: Extend virtio to use VF datapath when available
  virtio_net: Enable alternate datapath without creating an additional
netdev

 drivers/net/virtio_net.c| 564 +++-
 include/uapi/linux/virtio_net.h |   3 +
 2 files changed, 563 insertions(+), 4 deletions(-)

-- 
2.14.3


Re: [PATCH v21 1/5] xbitmap: Introduce xbitmap

2018-02-16 Thread Andy Shevchenko
On Tue, Jan 9, 2018 at 1:10 PM, Wei Wang  wrote:
> From: Matthew Wilcox 
>
> The eXtensible Bitmap is a sparse bitmap representation which is
> efficient for set bits which tend to cluster. It supports up to
> 'unsigned long' worth of bits.

>  lib/xbitmap.c | 444 +++

Please, split tests to a separate module.

-- 
With Best Regards,
Andy Shevchenko


Re: [PATCH v2 0/6] crypto: engine - Permit to enqueue all async requests

2018-02-16 Thread Corentin Labbe
On Thu, Feb 15, 2018 at 11:51:00PM +0800, Herbert Xu wrote:
> On Fri, Jan 26, 2018 at 08:15:28PM +0100, Corentin Labbe wrote:
> > Hello
> > 
> > The current crypto_engine supports only ahash and ablkcipher requests.
> > My first patch, which tried to add skcipher, was Nacked: it would add too
> > many functions, and adding other algs (aead, asymmetric_key) would make
> > the situation worse.
> > 
> > This patchset removes all alg-specific stuff and now only processes generic
> > crypto_async_request.
> > 
> > The request handler function pointers are now moved out of struct engine
> > and are stored directly in a crypto_engine_reqctx.
> > 
> > The original proposal of Herbert [1] cannot be done completely since the
> > crypto_engine can only dequeue crypto_async_request, and it is impossible
> > to access any request_ctx without knowing the underlying request type.
> > 
> > So I did something close to what was requested: adding crypto_engine_reqctx
> > to the TFM context.
> > Note that the current implementation expects that crypto_engine_reqctx
> > is the first member of the context.
> > 
> > The first patch is an attempt to document the crypto engine API.
> > The second patch converts the crypto engine to the new way,
> > while the following patches convert the 4 existing users of crypto_engine.
> > Note that this split breaks bisection, so the final commits will probably
> > all be merged.
> > 
> > Apart from virtio, the 4 latest patches were compile-tested only.
> > But the crypto engine is tested with my new sun8i-ce driver.
> > 
> > Regards
> > 
> > [1] 
> > https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg1474434.html
> > 
> > Changes since V1:
> > - renamed crypto_engine_reqctx to crypto_engine_ctx
> > - indentation fix in function parameter
> > - do not export crypto_transfer_request
> > - Add aead support
> > - crypto_finalize_request is now static
> > 
> > Changes since RFC:
> > - Added a documentation patch
> > - Added patch for stm32-cryp
> > - Changed parameter of all crypto_engine_op functions from
> > crypto_async_request to void*
> > - Reintroduced crypto_transfer_xxx_request_to_engine functions
> > 
> > Corentin Labbe (6):
> >   Documentation: crypto: document crypto engine API
> >   crypto: engine - Permit to enqueue all async requests
> >   crypto: omap: convert to new crypto engine API
> >   crypto: virtio: convert to new crypto engine API
> >   crypto: stm32-hash: convert to the new crypto engine API
> >   crypto: stm32-cryp: convert to the new crypto engine API
> 
> All applied.  Thanks.

Hello

As mentioned in the cover letter, all patches (except the documentation one)
should be squashed.
A kbuild robot reported a build error on cryptodev due to this.

Regards


[PATCH 1/4] qxl: remove qxl_io_log()

2018-02-16 Thread Gerd Hoffmann
qxl_io_log() sends messages over to the host (qemu) for logging.
Remove the function and all callers, we can just use standard
DRM_DEBUG calls (and if needed a serial console).

Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/qxl/qxl_drv.h |  3 ---
 drivers/gpu/drm/qxl/qxl_cmd.c | 34 ++
 drivers/gpu/drm/qxl/qxl_display.c | 27 ---
 drivers/gpu/drm/qxl/qxl_fb.c  |  2 --
 drivers/gpu/drm/qxl/qxl_irq.c |  3 +--
 5 files changed, 7 insertions(+), 62 deletions(-)

diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
index 00a1a66b05..4b89840173 100644
--- a/drivers/gpu/drm/qxl/qxl_drv.h
+++ b/drivers/gpu/drm/qxl/qxl_drv.h
@@ -298,9 +298,6 @@ struct qxl_device {
int monitors_config_height;
 };
 
-/* forward declaration for QXL_INFO_IO */
-__printf(2,3) void qxl_io_log(struct qxl_device *qdev, const char *fmt, ...);
-
 extern const struct drm_ioctl_desc qxl_ioctls[];
 extern int qxl_max_ioctl;
 
diff --git a/drivers/gpu/drm/qxl/qxl_cmd.c b/drivers/gpu/drm/qxl/qxl_cmd.c
index c0fb52c6d4..850f8d7d37 100644
--- a/drivers/gpu/drm/qxl/qxl_cmd.c
+++ b/drivers/gpu/drm/qxl/qxl_cmd.c
@@ -341,12 +341,9 @@ int qxl_io_update_area(struct qxl_device *qdev, struct qxl_bo *surf,
surface_height = surf->surf.height;
 
	if (area->left < 0 || area->top < 0 ||
-	    area->right > surface_width || area->bottom > surface_height) {
-		qxl_io_log(qdev, "%s: not doing area update for "
-			   "%d, (%d,%d,%d,%d) (%d,%d)\n", __func__, surface_id,
-			   area->left, area->top, area->right, area->bottom,
-			   surface_width, surface_height);
+	    area->right > surface_width || area->bottom > surface_height)
 		return -EINVAL;
-	}
+
	mutex_lock(&qdev->update_area_mutex);
qdev->ram_header->update_area = *area;
qdev->ram_header->update_surface = surface_id;
@@ -407,20 +404,6 @@ void qxl_io_memslot_add(struct qxl_device *qdev, uint8_t id)
wait_for_io_cmd(qdev, id, QXL_IO_MEMSLOT_ADD_ASYNC);
 }
 
-void qxl_io_log(struct qxl_device *qdev, const char *fmt, ...)
-{
-   va_list args;
-
-   va_start(args, fmt);
-   vsnprintf(qdev->ram_header->log_buf, QXL_LOG_BUF_SIZE, fmt, args);
-   va_end(args);
-   /*
-* DO not do a DRM output here - this will call printk, which will
-* call back into qxl for rendering (qxl_fb)
-*/
-   outb(0, qdev->io_base + QXL_IO_LOG);
-}
-
 void qxl_io_reset(struct qxl_device *qdev)
 {
outb(0, qdev->io_base + QXL_IO_RESET);
@@ -428,19 +411,6 @@ void qxl_io_reset(struct qxl_device *qdev)
 
 void qxl_io_monitors_config(struct qxl_device *qdev)
 {
-   qxl_io_log(qdev, "%s: %d [%dx%d+%d+%d]\n", __func__,
-  qdev->monitors_config ?
-  qdev->monitors_config->count : -1,
-  qdev->monitors_config && qdev->monitors_config->count ?
-  qdev->monitors_config->heads[0].width : -1,
-  qdev->monitors_config && qdev->monitors_config->count ?
-  qdev->monitors_config->heads[0].height : -1,
-  qdev->monitors_config && qdev->monitors_config->count ?
-  qdev->monitors_config->heads[0].x : -1,
-  qdev->monitors_config && qdev->monitors_config->count ?
-  qdev->monitors_config->heads[0].y : -1
-  );
-
wait_for_io_cmd(qdev, 0, QXL_IO_MONITORS_CONFIG_ASYNC);
 }
 
diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
index 9a9214ae0f..a0b6bced03 100644
--- a/drivers/gpu/drm/qxl/qxl_display.c
+++ b/drivers/gpu/drm/qxl/qxl_display.c
@@ -48,12 +48,8 @@ static void qxl_alloc_client_monitors_config(struct qxl_device *qdev, unsigned c
qdev->client_monitors_config = kzalloc(
sizeof(struct qxl_monitors_config) +
sizeof(struct qxl_head) * count, GFP_KERNEL);
-   if (!qdev->client_monitors_config) {
-   qxl_io_log(qdev,
-  "%s: allocation failure for %u heads\n",
-  __func__, count);
+   if (!qdev->client_monitors_config)
return;
-   }
}
qdev->client_monitors_config->count = count;
 }
@@ -74,12 +70,8 @@ static int qxl_display_copy_rom_client_monitors_config(struct qxl_device *qdev)
num_monitors = qdev->rom->client_monitors_config.count;
	crc = crc32(0, (const uint8_t *)&qdev->rom->client_monitors_config,
  sizeof(qdev->rom->client_monitors_config));
-   if (crc != qdev->rom->client_monitors_config_crc) {
-   qxl_io_log(qdev, "crc mismatch: have %X (%zd) != %X\n", crc,
-  sizeof(qdev->rom->client_monitors_config),
-  

[PATCH 3/4] qxl: hook monitors_config updates into crtc, not encoder.

2018-02-16 Thread Gerd Hoffmann
The encoder callbacks are only called in case the video mode changes.
So any layout changes without mode changes will go unnoticed.

Add qxl_crtc_update_monitors_config(), based on the old
qxl_write_monitors_config_for_encoder() function.  Hook it into the
enable, disable and flush atomic crtc callbacks.  Remove monitors_config
updates from all other places.

Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1544322
Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/qxl/qxl_cmd.c |   2 +
 drivers/gpu/drm/qxl/qxl_display.c | 156 --
 2 files changed, 66 insertions(+), 92 deletions(-)

diff --git a/drivers/gpu/drm/qxl/qxl_cmd.c b/drivers/gpu/drm/qxl/qxl_cmd.c
index 850f8d7d37..95db20f214 100644
--- a/drivers/gpu/drm/qxl/qxl_cmd.c
+++ b/drivers/gpu/drm/qxl/qxl_cmd.c
@@ -371,6 +371,7 @@ void qxl_io_flush_surfaces(struct qxl_device *qdev)
 void qxl_io_destroy_primary(struct qxl_device *qdev)
 {
wait_for_io_cmd(qdev, 0, QXL_IO_DESTROY_PRIMARY_ASYNC);
+   qdev->primary_created = false;
 }
 
 void qxl_io_create_primary(struct qxl_device *qdev,
@@ -396,6 +397,7 @@ void qxl_io_create_primary(struct qxl_device *qdev,
create->type = QXL_SURF_TYPE_PRIMARY;
 
wait_for_io_cmd(qdev, 0, QXL_IO_CREATE_PRIMARY_ASYNC);
+   qdev->primary_created = true;
 }
 
 void qxl_io_memslot_add(struct qxl_device *qdev, uint8_t id)
diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
index 8efd07f677..b7dac01f5e 100644
--- a/drivers/gpu/drm/qxl/qxl_display.c
+++ b/drivers/gpu/drm/qxl/qxl_display.c
@@ -281,6 +281,66 @@ static void qxl_send_monitors_config(struct qxl_device *qdev)
qxl_io_monitors_config(qdev);
 }
 
+static void qxl_crtc_update_monitors_config(struct drm_crtc *crtc,
+   const char *reason)
+{
+   struct drm_device *dev = crtc->dev;
+   struct qxl_device *qdev = dev->dev_private;
+   struct qxl_crtc *qcrtc = to_qxl_crtc(crtc);
+   struct qxl_head head;
+   int oldcount, i = qcrtc->index;
+
+   if (!qdev->primary_created) {
+   DRM_DEBUG_KMS("no primary surface, skip (%s)\n", reason);
+   return;
+   }
+
+   if (!qdev->monitors_config ||
+   qdev->monitors_config->max_allowed <= i)
+   return;
+
+   head.id = i;
+   head.flags = 0;
+   oldcount = qdev->monitors_config->count;
+   if (crtc->state->active) {
+		struct drm_display_mode *mode = &crtc->mode;
+   head.width = mode->hdisplay;
+   head.height = mode->vdisplay;
+   head.x = crtc->x;
+   head.y = crtc->y;
+   if (qdev->monitors_config->count < i + 1)
+   qdev->monitors_config->count = i + 1;
+   } else if (i > 0) {
+   head.width = 0;
+   head.height = 0;
+   head.x = 0;
+   head.y = 0;
+   if (qdev->monitors_config->count == i + 1)
+   qdev->monitors_config->count = i;
+   } else {
+   DRM_DEBUG_KMS("inactive head 0, skip (%s)\n", reason);
+   return;
+   }
+
+   if (head.width  == qdev->monitors_config->heads[i].width  &&
+   head.height == qdev->monitors_config->heads[i].height &&
+   head.x  == qdev->monitors_config->heads[i].x  &&
+   head.y  == qdev->monitors_config->heads[i].y  &&
+   oldcount== qdev->monitors_config->count)
+   return;
+
+   DRM_DEBUG_KMS("head %d, %dx%d, at +%d+%d, %s (%s)\n",
+ i, head.width, head.height, head.x, head.y,
+ crtc->state->active ? "on" : "off", reason);
+   if (oldcount != qdev->monitors_config->count)
+   DRM_DEBUG_KMS("active heads %d -> %d (%d total)\n",
+ oldcount, qdev->monitors_config->count,
+ qdev->monitors_config->max_allowed);
+
+   qdev->monitors_config->heads[i] = head;
+   qxl_send_monitors_config(qdev);
+}
+
 static void qxl_crtc_atomic_flush(struct drm_crtc *crtc,
  struct drm_crtc_state *old_crtc_state)
 {
@@ -296,6 +356,8 @@ static void qxl_crtc_atomic_flush(struct drm_crtc *crtc,
drm_crtc_send_vblank_event(crtc, event);
		spin_unlock_irqrestore(&dev->event_lock, flags);
}
+
+   qxl_crtc_update_monitors_config(crtc, "flush");
 }
 
 static void qxl_crtc_destroy(struct drm_crtc *crtc)
@@ -401,55 +463,20 @@ static bool qxl_crtc_mode_fixup(struct drm_crtc *crtc,
return true;
 }
 
-static void qxl_monitors_config_set(struct qxl_device *qdev,
-   int index,
-   unsigned x, unsigned y,
-   unsigned width, unsigned height,
-   unsigned surf_id)
-{
-   

[PATCH 2/4] qxl: move qxl_send_monitors_config()

2018-02-16 Thread Gerd Hoffmann
Needed to avoid a forward declaration in a followup patch.
Pure code move, no functional change.

Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/qxl/qxl_display.c | 47 +++
 1 file changed, 23 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/drm/qxl/qxl_display.c 
b/drivers/gpu/drm/qxl/qxl_display.c
index a0b6bced03..8efd07f677 100644
--- a/drivers/gpu/drm/qxl/qxl_display.c
+++ b/drivers/gpu/drm/qxl/qxl_display.c
@@ -258,6 +258,29 @@ static int qxl_add_common_modes(struct drm_connector *connector,
return i - 1;
 }
 
+static void qxl_send_monitors_config(struct qxl_device *qdev)
+{
+   int i;
+
+   BUG_ON(!qdev->ram_header->monitors_config);
+
+   if (qdev->monitors_config->count == 0)
+   return;
+
+   for (i = 0 ; i < qdev->monitors_config->count ; ++i) {
+		struct qxl_head *head = &qdev->monitors_config->heads[i];
+
+   if (head->y > 8192 || head->x > 8192 ||
+   head->width > 8192 || head->height > 8192) {
+   DRM_ERROR("head %d wrong: %dx%d+%d+%d\n",
+ i, head->width, head->height,
+ head->x, head->y);
+   return;
+   }
+   }
+   qxl_io_monitors_config(qdev);
+}
+
 static void qxl_crtc_atomic_flush(struct drm_crtc *crtc,
  struct drm_crtc_state *old_crtc_state)
 {
@@ -378,30 +401,6 @@ static bool qxl_crtc_mode_fixup(struct drm_crtc *crtc,
return true;
 }
 
-static void
-qxl_send_monitors_config(struct qxl_device *qdev)
-{
-   int i;
-
-   BUG_ON(!qdev->ram_header->monitors_config);
-
-   if (qdev->monitors_config->count == 0)
-   return;
-
-   for (i = 0 ; i < qdev->monitors_config->count ; ++i) {
-		struct qxl_head *head = &qdev->monitors_config->heads[i];
-
-   if (head->y > 8192 || head->x > 8192 ||
-   head->width > 8192 || head->height > 8192) {
-   DRM_ERROR("head %d wrong: %dx%d+%d+%d\n",
- i, head->width, head->height,
- head->x, head->y);
-   return;
-   }
-   }
-   qxl_io_monitors_config(qdev);
-}
-
 static void qxl_monitors_config_set(struct qxl_device *qdev,
int index,
unsigned x, unsigned y,
-- 
2.9.3



[PATCH 4/4] qxl: drop dummy functions

2018-02-16 Thread Gerd Hoffmann
These days drm core checks function pointers everywhere before calling
them.  So we can drop a bunch of dummy functions now.

Signed-off-by: Gerd Hoffmann 
---
 drivers/gpu/drm/qxl/qxl_display.c | 50 ---
 1 file changed, 50 deletions(-)

diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
index b7dac01f5e..4a8c80bde5 100644
--- a/drivers/gpu/drm/qxl/qxl_display.c
+++ b/drivers/gpu/drm/qxl/qxl_display.c
@@ -456,13 +456,6 @@ qxl_framebuffer_init(struct drm_device *dev,
return 0;
 }
 
-static bool qxl_crtc_mode_fixup(struct drm_crtc *crtc,
- const struct drm_display_mode *mode,
- struct drm_display_mode *adjusted_mode)
-{
-   return true;
-}
-
 static void qxl_crtc_atomic_enable(struct drm_crtc *crtc,
   struct drm_crtc_state *old_state)
 {
@@ -476,7 +469,6 @@ static void qxl_crtc_atomic_disable(struct drm_crtc *crtc,
 }
 
 static const struct drm_crtc_helper_funcs qxl_crtc_helper_funcs = {
-   .mode_fixup = qxl_crtc_mode_fixup,
.atomic_flush = qxl_crtc_atomic_flush,
.atomic_enable = qxl_crtc_atomic_enable,
.atomic_disable = qxl_crtc_atomic_disable,
@@ -620,12 +612,6 @@ static void qxl_primary_atomic_disable(struct drm_plane *plane,
}
 }
 
-static int qxl_plane_atomic_check(struct drm_plane *plane,
- struct drm_plane_state *state)
-{
-   return 0;
-}
-
 static void qxl_cursor_atomic_update(struct drm_plane *plane,
 struct drm_plane_state *old_state)
 {
@@ -831,7 +817,6 @@ static const uint32_t qxl_cursor_plane_formats[] = {
 };
 
 static const struct drm_plane_helper_funcs qxl_cursor_helper_funcs = {
-   .atomic_check = qxl_plane_atomic_check,
.atomic_update = qxl_cursor_atomic_update,
.atomic_disable = qxl_cursor_atomic_disable,
.prepare_fb = qxl_plane_prepare_fb,
@@ -956,28 +941,6 @@ static int qdev_crtc_init(struct drm_device *dev, int crtc_id)
return r;
 }
 
-static void qxl_enc_dpms(struct drm_encoder *encoder, int mode)
-{
-   DRM_DEBUG("\n");
-}
-
-static void qxl_enc_prepare(struct drm_encoder *encoder)
-{
-   DRM_DEBUG("\n");
-}
-
-static void qxl_enc_commit(struct drm_encoder *encoder)
-{
-   DRM_DEBUG("\n");
-}
-
-static void qxl_enc_mode_set(struct drm_encoder *encoder,
-   struct drm_display_mode *mode,
-   struct drm_display_mode *adjusted_mode)
-{
-   DRM_DEBUG("\n");
-}
-
 static int qxl_conn_get_modes(struct drm_connector *connector)
 {
unsigned pwidth = 1024;
@@ -1023,10 +986,6 @@ static struct drm_encoder *qxl_best_encoder(struct drm_connector *connector)
 
 
 static const struct drm_encoder_helper_funcs qxl_enc_helper_funcs = {
-   .dpms = qxl_enc_dpms,
-   .prepare = qxl_enc_prepare,
-   .mode_set = qxl_enc_mode_set,
-   .commit = qxl_enc_commit,
 };
 
 static const struct drm_connector_helper_funcs qxl_connector_helper_funcs = {
@@ -1059,14 +1018,6 @@ static enum drm_connector_status qxl_conn_detect(
 : connector_status_disconnected;
 }
 
-static int qxl_conn_set_property(struct drm_connector *connector,
-  struct drm_property *property,
-  uint64_t value)
-{
-   DRM_DEBUG("\n");
-   return 0;
-}
-
 static void qxl_conn_destroy(struct drm_connector *connector)
 {
struct qxl_output *qxl_output =
@@ -1081,7 +1032,6 @@ static const struct drm_connector_funcs qxl_connector_funcs = {
.dpms = drm_helper_connector_dpms,
.detect = qxl_conn_detect,
.fill_modes = drm_helper_probe_single_connector_modes,
-   .set_property = qxl_conn_set_property,
.destroy = qxl_conn_destroy,
.reset = drm_atomic_helper_connector_reset,
.atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
-- 
2.9.3



Re: [PATCH v3 1/2] drm/virtio: Add window server support

2018-02-16 Thread Gerd Hoffmann
> > Yes.
> 
> Would it make sense for virtio-gpu to map buffers to the guest via PCI BARs?
> So we can use a single drm driver for both 2d and 3d.

Should be doable.

I'm wondering two things though:

(1) Will shmem actually help avoiding a copy?

virtio-gpu with virgl will (even if the guest doesn't use opengl) store
the resources in gpu memory.  So the VIRTIO_GPU_CMD_TRANSFER_TO_HOST_2D
copy goes from guest memory directly to gpu memory, and if we export
that as dma-buf and pass it to the wayland server it should be able to
render it without doing another copy.

What does the wl_shm_pool workflow look like inside the wayland server?
Can it ask the gpu to render directly from the pool?  Or is a copy to
gpu memory needed here?  If the latter we would effectively trade one
copy for another ...

(2) Could we handle the mapping without needing shmem?

Possibly we could extend the vgem driver: pass in an iov (which
qemu gets from the guest via VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING) and get
back a drm object.  That would effectively create drm objects on the host
which match the drm objects in the guest (both backed by the same set of
physical pages).

cheers,
  Gerd
