Re: [PATCH 1/1] virtio_ring: fix return code on DMA mapping fails

2019-11-27 Thread Christoph Hellwig
On Thu, Nov 28, 2019 at 12:42:25AM +, Ashish Kalra wrote:
> Why can't we leverage CMA instead of SWIOTLB for DMA when SEV is
> enabled? CMA is well integrated with the DMA subsystem and handles
> encrypted pages when force_dma_unencrypted() returns TRUE. 
> 
> Though CMA might face the same issues as SWIOTLB bounce buffers, its
> size is similarly set up statically, as SWIOTLB's is, or can be set as a 
> percentage of the available system memory.

How is CMA integrated with SEV?  CMA just gives a contiguous chunk
of memory, which still needs to be remapped as unencrypted before
returning it to the user.
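To illustrate the point about remapping: under SEV, any allocator-backed DMA buffer (CMA or otherwise) still has to be marked shared with the host before use. A hedged sketch of that extra step, using the kernel APIs of this era (`force_dma_unencrypted()`, `set_memory_decrypted()`); names and error handling are simplified, this is not the actual dma-direct code:

```c
#include <linux/dma-direct.h>
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/set_memory.h>

static void *sev_dma_alloc(struct device *dev, size_t size,
			   dma_addr_t *dma_handle, gfp_t gfp)
{
	struct page *page;
	void *vaddr;

	page = alloc_pages(gfp, get_order(size));  /* may be CMA-backed */
	if (!page)
		return NULL;
	vaddr = page_address(page);

	/* The step CMA does not do by itself: remap the pages as
	 * unencrypted so the (untrusted) host can access them. */
	if (force_dma_unencrypted(dev))
		set_memory_decrypted((unsigned long)vaddr,
				     PAGE_ALIGN(size) >> PAGE_SHIFT);

	*dma_handle = phys_to_dma(dev, page_to_phys(page));
	return vaddr;
}
```

Freeing would have to undo this with set_memory_encrypted() before returning the pages to the allocator, which is exactly the bookkeeping SWIOTLB centralizes.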
___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


Re: [net-next V3 0/2] drivers: net: virtio_net: implement

2019-11-27 Thread Michael S. Tsirkin
On Wed, Nov 27, 2019 at 10:59:56AM -0800, David Miller wrote:
> From: "Michael S. Tsirkin" 
> Date: Wed, 27 Nov 2019 06:38:35 -0500
> 
> > On Tue, Nov 26, 2019 at 02:06:30PM -0800, David Miller wrote:
> >> 
> >> net-next is closed
> > 
> > Could you merge this early when net-next reopens though?
> > This way I don't need to keep adding drivers to update.
> 
> It simply needs to be reposted as soon as net-next opens back up.
> 
> I fail to understand even what special treatment you want given to
> a given change, it doesn't make any sense.  We have a process for
> doing this, it's simple, it's straightforward, and is fair to
> everyone.
> 
> Thanks.

Will do, thanks.



Re: [net-next V3 0/2] drivers: net: virtio_net: implement

2019-11-27 Thread David Miller
From: "Michael S. Tsirkin" 
Date: Wed, 27 Nov 2019 06:38:35 -0500

> On Tue, Nov 26, 2019 at 02:06:30PM -0800, David Miller wrote:
>> 
>> net-next is closed
> 
> Could you merge this early when net-next reopens though?
> This way I don't need to keep adding drivers to update.

It simply needs to be reposted as soon as net-next opens back up.

I fail to understand even what special treatment you want given to
a given change, it doesn't make any sense.  We have a process for
doing this, it's simple, it's straightforward, and is fair to
everyone.

Thanks.


Re: [net-next V3 2/2] drivers: net: virtio_net: Implement a dev_watchdog handler

2019-11-27 Thread Michael S. Tsirkin
On Tue, Nov 26, 2019 at 05:06:28PM -0300, Julio Faracco wrote:
> The virtio_net driver is not handling the TX error events provided by
> dev_watchdog. This event is raised when a transmission queue is having
> problems transmitting packets, which could happen for any reason. To
> enable it, the driver should implement .ndo_tx_timeout.
> 
> This commit brings back the virtnet_reset method to recover TX queues
> from an error state. That function is scheduled via schedule_work(),
> which puts the reset function onto a work queue.
> 
> As the error cause is unknown at this moment, it is better to reset
> all queues, including RX (because we have no control over this).
> 
> Signed-off-by: Julio Faracco 
> Signed-off-by: Daiane Mendes 
> Cc: Jason Wang 
> ---
>  drivers/net/virtio_net.c | 83 +++-
>  1 file changed, 82 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index 4d7d5434cc5d..fbe1dfde3a4b 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -75,6 +75,7 @@ struct virtnet_sq_stats {
>   u64 xdp_tx;
>   u64 xdp_tx_drops;
>   u64 kicks;
> + u64 tx_timeouts;
>  };
>  
>  struct virtnet_rq_stats {
> @@ -98,6 +99,7 @@ static const struct virtnet_stat_desc virtnet_sq_stats_desc[] = {
>   { "xdp_tx", VIRTNET_SQ_STAT(xdp_tx) },
>   { "xdp_tx_drops",   VIRTNET_SQ_STAT(xdp_tx_drops) },
>   { "kicks",  VIRTNET_SQ_STAT(kicks) },
> + { "tx_timeouts",VIRTNET_SQ_STAT(tx_timeouts) },
>  };
>  
>  static const struct virtnet_stat_desc virtnet_rq_stats_desc[] = {
> @@ -211,6 +213,9 @@ struct virtnet_info {
>   /* Work struct for config space updates */
>   struct work_struct config_work;
>  
> + /* Work struct for resetting the virtio-net driver. */
> + struct work_struct reset_work;
> +
>   /* Does the affinity hint is set for virtqueues? */
>   bool affinity_hint_set;
>  
> @@ -1721,7 +1726,7 @@ static void virtnet_stats(struct net_device *dev,
>   int i;
>  
>   for (i = 0; i < vi->max_queue_pairs; i++) {
> - u64 tpackets, tbytes, rpackets, rbytes, rdrops;
> + u64 tpackets, tbytes, terrors, rpackets, rbytes, rdrops;
>   struct receive_queue *rq = &vi->rq[i];
>   struct send_queue *sq = &vi->sq[i];
>  
> @@ -1729,6 +1734,7 @@ static void virtnet_stats(struct net_device *dev,
>   start = u64_stats_fetch_begin_irq(&sq->stats.syncp);
>   tpackets = sq->stats.packets;
>   tbytes   = sq->stats.bytes;
> + terrors  = sq->stats.tx_timeouts;
>   } while (u64_stats_fetch_retry_irq(&sq->stats.syncp, start));
>  
>   do {
> @@ -1743,6 +1749,7 @@ static void virtnet_stats(struct net_device *dev,
>   tot->rx_bytes   += rbytes;
>   tot->tx_bytes   += tbytes;
>   tot->rx_dropped += rdrops;
> + tot->tx_errors  += terrors;
>   }
>  
>   tot->tx_dropped = dev->stats.tx_dropped;
> @@ -2578,6 +2585,21 @@ static int virtnet_set_features(struct net_device *dev,
>   return 0;
>  }
>  
> +static void virtnet_tx_timeout(struct net_device *dev, unsigned int txqueue)
> +{
> + struct virtnet_info *vi = netdev_priv(dev);
> + struct send_queue *sq = &vi->sq[txqueue];
> +
> + netdev_warn(dev, "TX timeout on queue: %d, sq: %s, vq: %d, name: %s\n",
> + txqueue, sq->name, sq->vq->index, sq->vq->name);
> +
> + u64_stats_update_begin(&sq->stats.syncp);
> + sq->stats.tx_timeouts++;
> + u64_stats_update_end(&sq->stats.syncp);
> +
> + schedule_work(&vi->reset_work);
> +}
> +
>  static const struct net_device_ops virtnet_netdev = {
>   .ndo_open= virtnet_open,
>   .ndo_stop= virtnet_close,
> @@ -2593,6 +2615,7 @@ static const struct net_device_ops virtnet_netdev = {
>   .ndo_features_check = passthru_features_check,
>   .ndo_get_phys_port_name = virtnet_get_phys_port_name,
>   .ndo_set_features   = virtnet_set_features,
> + .ndo_tx_timeout = virtnet_tx_timeout,
>  };
>  
>  static void virtnet_config_changed_work(struct work_struct *work)
> @@ -2982,6 +3005,62 @@ static int virtnet_validate(struct virtio_device *vdev)
>   return 0;
>  }
>  
> +static void _remove_vq_common(struct virtnet_info *vi)
> +{
> + vi->vdev->config->reset(vi->vdev);
> +
> + /* Free unused buffers in both send and recv, if any. */
> + free_unused_bufs(vi);
> +
> + _free_receive_bufs(vi);
> +
> + free_receive_page_frags(vi);
> +
> + virtnet_del_vqs(vi);
> +}
> +
> +static int _virtnet_reset(struct virtnet_info *vi)
> +{
> + struct virtio_device *vdev = vi->vdev;
> + int ret;
> +
> + virtio_config_disable(vdev);
> + vdev->failed = vdev->config->get_status(vdev) & VIRTIO_CONFIG_S_FAILED;
> +
> + virtnet_freeze_down(vdev);
> + 
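The deferral pattern the patch relies on (.ndo_tx_timeout runs in a context where the device cannot be torn down, so the heavy reset is queued and executed later by a kernel worker) can be sketched as follows. This is a hedged, generic illustration; `my_priv`, `my_reset_worker`, and the other `my_*` names are hypothetical, not from the patch:

```c
#include <linux/workqueue.h>

struct my_priv {
	struct work_struct reset_work;
};

static void my_reset_worker(struct work_struct *work)
{
	struct my_priv *priv = container_of(work, struct my_priv,
					    reset_work);
	/* Runs in process context: safe to sleep while stopping the
	 * queues, resetting the device, and bringing it back up. */
	(void)priv;
}

/* At probe time: bind the work item to its handler. */
static void my_setup(struct my_priv *priv)
{
	INIT_WORK(&priv->reset_work, my_reset_worker);
}

/* From .ndo_tx_timeout: cannot sleep here, so just queue the work. */
static void my_tx_timeout(struct my_priv *priv)
{
	schedule_work(&priv->reset_work);
}
```

schedule_work() puts the item on the system workqueue; a dedicated workqueue would also work but is unnecessary for a rare recovery path like this one.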

Re: [net-next V3 0/2] drivers: net: virtio_net: implement

2019-11-27 Thread Michael S. Tsirkin
On Wed, Nov 27, 2019 at 06:38:39AM -0500, Michael S. Tsirkin wrote:
> On Tue, Nov 26, 2019 at 02:06:30PM -0800, David Miller wrote:
> > 
> > net-next is closed
> 
> Could you merge this early when net-next reopens though?
> This way I don't need to keep adding drivers to update.


I just mean 1/2 btw. I think 2/2 might still need some work.

> Thanks,
> 
> -- 
> MST



Re: [net-next V3 0/2] drivers: net: virtio_net: implement

2019-11-27 Thread Michael S. Tsirkin
On Tue, Nov 26, 2019 at 02:06:30PM -0800, David Miller wrote:
> 
> net-next is closed

Could you merge this early when net-next reopens though?
This way I don't need to keep adding drivers to update.

Thanks,

-- 
MST
