Re: [RFC] virtio: Use DMA MAP API for devices without an IOMMU
On Thu, 2018-04-05 at 21:34 +0300, Michael S. Tsirkin wrote:
> > In this specific case, because that would make qemu expect an iommu,
> > and there isn't one.
>
> I think that you can set iommu_platform in qemu without an iommu.

No I mean the platform has one but it's not desirable for it to be used
due to the performance hit.

Cheers,
Ben.

> > Anshuman, you need to provide more background here. I don't have time
> > right now it's late, but explain about the fact that this is for a
> > specific type of secure VM which has only a limited pool of (insecure)
> > memory that can be shared with qemu, so all IOs need to bounce via that
> > pool, which can be achieved by using swiotlb.
> >
> > Note: this isn't urgent, we can discuss alternative approaches, this is
> > just to start the conversation.
> >
> > Cheers,
> > Ben.

___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
[RFC PATCH net-next v5 1/4] virtio_net: Introduce VIRTIO_NET_F_BACKUP feature bit
This feature bit can be used by the hypervisor to indicate that the
virtio_net device should act as a backup for another device with the
same MAC address.

VIRTIO_NET_F_BACKUP is defined as bit 62 as it is a device feature bit.

Signed-off-by: Sridhar Samudrala
---
 drivers/net/virtio_net.c        | 2 +-
 include/uapi/linux/virtio_net.h | 3 +++
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 7b187ec7411e..befb5944f3fd 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -2962,7 +2962,7 @@ static struct virtio_device_id id_table[] = {
 	VIRTIO_NET_F_GUEST_ANNOUNCE, VIRTIO_NET_F_MQ, \
 	VIRTIO_NET_F_CTRL_MAC_ADDR, \
 	VIRTIO_NET_F_MTU, VIRTIO_NET_F_CTRL_GUEST_OFFLOADS, \
-	VIRTIO_NET_F_SPEED_DUPLEX
+	VIRTIO_NET_F_SPEED_DUPLEX, VIRTIO_NET_F_BACKUP

 static unsigned int features[] = {
 	VIRTNET_FEATURES,

diff --git a/include/uapi/linux/virtio_net.h b/include/uapi/linux/virtio_net.h
index 5de6ed37695b..c7c35fd1a5ed 100644
--- a/include/uapi/linux/virtio_net.h
+++ b/include/uapi/linux/virtio_net.h
@@ -57,6 +57,9 @@
 					 * Steering */
 #define VIRTIO_NET_F_CTRL_MAC_ADDR 23	/* Set MAC address */
+#define VIRTIO_NET_F_BACKUP	  62	/* Act as backup for another device
+					 * with the same MAC.
+					 */
 #define VIRTIO_NET_F_SPEED_DUPLEX 63	/* Device set linkspeed and duplex */

 #ifndef VIRTIO_NET_NO_LEGACY
--
2.14.3
[RFC PATCH net-next v5 3/4] virtio_net: Extend virtio to use VF datapath when available
This patch enables virtio_net to switch over to a VF datapath when a VF
netdev is present with the same MAC address. It allows live migration of
a VM with a direct attached VF without the need to setup a bond/team
between a VF and virtio net device in the guest.

The hypervisor needs to enable only one datapath at any time so that
packets don't get looped back to the VM over the other datapath. When a
VF is plugged, the virtio datapath link state can be marked as down. The
hypervisor needs to unplug the VF device from the guest on the source
host and reset the MAC filter of the VF to initiate failover of datapath
to virtio before starting the migration. After the migration is
completed, the destination hypervisor sets the MAC filter on the VF and
plugs it back to the guest to switch over to VF datapath.

When BACKUP feature is enabled, an additional netdev (bypass netdev) is
created that acts as a master device and tracks the state of the 2 lower
netdevs. The original virtio_net netdev is marked as 'backup' netdev and
a passthru device with the same MAC is registered as 'active' netdev.

This patch is based on the discussion initiated by Jesse on this thread.
https://marc.info/?l=linux-virtualization=151189725224231=2

Signed-off-by: Sridhar Samudrala
---
 drivers/net/Kconfig      |   1 +
 drivers/net/virtio_net.c | 612 ++-
 2 files changed, 612 insertions(+), 1 deletion(-)

diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig
index 891846655000..9e2cf61fd1c1 100644
--- a/drivers/net/Kconfig
+++ b/drivers/net/Kconfig
@@ -331,6 +331,7 @@ config VETH
 config VIRTIO_NET
 	tristate "Virtio network driver"
 	depends on VIRTIO
+	depends on MAY_USE_BYPASS
 	---help---
 	  This is the virtual network driver for virtio. It can be used with
 	  QEMU based VMMs (like KVM or Xen). Say Y or M.
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index befb5944f3fd..86b2f8f2947d 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -30,8 +30,11 @@
 #include
 #include
 #include
+#include
+#include
 #include
 #include
+#include

 static int napi_weight = NAPI_POLL_WEIGHT;
 module_param(napi_weight, int, 0444);
@@ -206,6 +209,9 @@ struct virtnet_info {
 	u32 speed;

 	unsigned long guest_offloads;
+
+	/* upper netdev created when BACKUP feature enabled */
+	struct net_device __rcu *bypass_netdev;
 };

 struct padded_vnet_hdr {
@@ -2275,6 +2281,22 @@ static int virtnet_xdp(struct net_device *dev, struct netdev_bpf *xdp)
 	}
 }

+static int virtnet_get_phys_port_name(struct net_device *dev, char *buf,
+				      size_t len)
+{
+	struct virtnet_info *vi = netdev_priv(dev);
+	int ret;
+
+	if (!virtio_has_feature(vi->vdev, VIRTIO_NET_F_BACKUP))
+		return -EOPNOTSUPP;
+
+	ret = snprintf(buf, len, "_bkup");
+	if (ret >= len)
+		return -EOPNOTSUPP;
+
+	return 0;
+}
+
 static const struct net_device_ops virtnet_netdev = {
 	.ndo_open            = virtnet_open,
 	.ndo_stop            = virtnet_close,
@@ -2292,6 +2314,7 @@ static const struct net_device_ops virtnet_netdev = {
 	.ndo_xdp_xmit		= virtnet_xdp_xmit,
 	.ndo_xdp_flush		= virtnet_xdp_flush,
 	.ndo_features_check	= passthru_features_check,
+	.ndo_get_phys_port_name	= virtnet_get_phys_port_name,
 };

 static void virtnet_config_changed_work(struct work_struct *work)
@@ -2689,6 +2712,576 @@ static int virtnet_validate(struct virtio_device *vdev)
 	return 0;
 }

+/* START of functions supporting VIRTIO_NET_F_BACKUP feature.
+ * When BACKUP feature is enabled, an additional netdev (bypass netdev)
+ * is created that acts as a master device and tracks the state of the
+ * 2 lower netdevs. The original virtio_net netdev is registered as
+ * 'backup' netdev and a passthru device with the same MAC is registered
+ * as 'active' netdev.
+ */
+
+/* bypass state maintained when BACKUP feature is enabled */
+struct virtnet_bypass_info {
+	/* passthru netdev with same MAC */
+	struct net_device __rcu *active_netdev;
+
+	/* virtio_net netdev */
+	struct net_device __rcu *backup_netdev;
+
+	/* active netdev stats */
+	struct rtnl_link_stats64 active_stats;
+
+	/* backup netdev stats */
+	struct rtnl_link_stats64 backup_stats;
+
+	/* aggregated stats */
+	struct rtnl_link_stats64 bypass_stats;
+
+	/* spinlock while updating stats */
+	spinlock_t stats_lock;
+};
+
+static int virtnet_bypass_open(struct net_device *dev)
+{
+	struct virtnet_bypass_info *vbi = netdev_priv(dev);
+	struct net_device *active_netdev, *backup_netdev;
+	int err;
+
+	netif_carrier_off(dev);
+	netif_tx_wake_all_queues(dev);
+
+	active_netdev = rtnl_dereference(vbi->active_netdev);
[RFC PATCH net-next v5 2/4] net: Introduce generic bypass module
This provides a generic interface for paravirtual drivers to listen for
netdev register/unregister/link change events from pci ethernet devices
with the same MAC and takeover their datapath. The notifier and event
handling code is based on the existing netvsc implementation.

A paravirtual driver can use this module by registering a set of ops and
each instance of the device when it is probed.

Signed-off-by: Sridhar Samudrala
---
 include/net/bypass.h |  80 ++
 net/Kconfig          |  18 +++
 net/core/Makefile    |   1 +
 net/core/bypass.c    | 406 +++
 4 files changed, 505 insertions(+)
 create mode 100644 include/net/bypass.h
 create mode 100644 net/core/bypass.c

diff --git a/include/net/bypass.h b/include/net/bypass.h
new file mode 100644
index ..e2dd122f951a
--- /dev/null
+++ b/include/net/bypass.h
@@ -0,0 +1,80 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2018, Intel Corporation. */
+
+#ifndef _NET_BYPASS_H
+#define _NET_BYPASS_H
+
+#include
+
+struct bypass_ops {
+	int (*register_child)(struct net_device *bypass_netdev,
+			      struct net_device *child_netdev);
+	int (*join_child)(struct net_device *bypass_netdev,
+			  struct net_device *child_netdev);
+	int (*unregister_child)(struct net_device *bypass_netdev,
+				struct net_device *child_netdev);
+	int (*release_child)(struct net_device *bypass_netdev,
+			     struct net_device *child_netdev);
+	int (*update_link)(struct net_device *bypass_netdev,
+			   struct net_device *child_netdev);
+	rx_handler_result_t (*handle_frame)(struct sk_buff **pskb);
+};
+
+struct bypass_instance {
+	struct list_head list;
+	struct net_device __rcu *bypass_netdev;
+	struct bypass *bypass;
+};
+
+struct bypass {
+	struct list_head list;
+	const struct bypass_ops *ops;
+	const struct net_device_ops *netdev_ops;
+	struct list_head instance_list;
+	struct mutex lock;
+};
+
+#if IS_ENABLED(CONFIG_NET_BYPASS)
+
+struct bypass *bypass_register_driver(const struct bypass_ops *ops,
+				      const struct net_device_ops *netdev_ops);
+void bypass_unregister_driver(struct bypass *bypass);
+
+int bypass_register_instance(struct bypass *bypass, struct net_device *dev);
+int bypass_unregister_instance(struct bypass *bypass, struct net_device *dev);
+
+int bypass_unregister_child(struct net_device *child_netdev);
+
+#else
+
+static inline
+struct bypass *bypass_register_driver(const struct bypass_ops *ops,
+				      const struct net_device_ops *netdev_ops)
+{
+	return NULL;
+}
+
+static inline void bypass_unregister_driver(struct bypass *bypass)
+{
+}
+
+static inline int bypass_register_instance(struct bypass *bypass,
+					   struct net_device *dev)
+{
+	return 0;
+}
+
+static inline int bypass_unregister_instance(struct bypass *bypass,
+					     struct net_device *dev)
+{
+	return 0;
+}
+
+static inline int bypass_unregister_child(struct net_device *child_netdev)
+{
+	return 0;
+}
+
+#endif
+
+#endif /* _NET_BYPASS_H */

diff --git a/net/Kconfig b/net/Kconfig
index 0428f12c25c2..994445f4a96a 100644
--- a/net/Kconfig
+++ b/net/Kconfig
@@ -423,6 +423,24 @@ config MAY_USE_DEVLINK
 	  on MAY_USE_DEVLINK to ensure they do not cause link errors
 	  when devlink is a loadable module and the driver using it is
 	  built-in.

+config NET_BYPASS
+	tristate "Bypass interface"
+	---help---
+	  This provides a generic interface for paravirtual drivers to listen
+	  for netdev register/unregister/link change events from pci ethernet
+	  devices with the same MAC and takeover their datapath. This also
+	  enables live migration of a VM with direct attached VF by failing
+	  over to the paravirtual datapath when the VF is unplugged.
+
+config MAY_USE_BYPASS
+	tristate
+	default m if NET_BYPASS=m
+	default y if NET_BYPASS=y || NET_BYPASS=n
+	help
+	  Drivers using the bypass infrastructure should have a dependency
+	  on MAY_USE_BYPASS to ensure they do not cause link errors when
+	  bypass is a loadable module and the driver using it is built-in.
+
 endif   # if NET

 # Used by archs to tell that they support BPF JIT compiler plus which
 # flavour.
diff --git a/net/core/Makefile b/net/core/Makefile
index 6dbbba8c57ae..a9727ed1c8fc 100644
--- a/net/core/Makefile
+++ b/net/core/Makefile
@@ -30,3 +30,4 @@ obj-$(CONFIG_DST_CACHE) += dst_cache.o
 obj-$(CONFIG_HWBM) += hwbm.o
 obj-$(CONFIG_NET_DEVLINK) += devlink.o
 obj-$(CONFIG_GRO_CELLS) += gro_cells.o
+obj-$(CONFIG_NET_BYPASS) += bypass.o

diff --git a/net/core/bypass.c b/net/core/bypass.c
new file
[RFC PATCH net-next v5 0/4] Enable virtio_net to act as a backup for a passthru device
The main motivation for this patch is to enable cloud service providers
to provide an accelerated datapath to virtio-net enabled VMs in a
transparent manner with no/minimal guest userspace changes. This also
enables hypervisor controlled live migration to be supported with VMs
that have direct attached SR-IOV VF devices.

Patch 1 introduces a new feature bit VIRTIO_NET_F_BACKUP that can be
used by the hypervisor to indicate that the virtio_net interface should
act as a backup for another device with the same MAC address.

Patch 2 introduces a bypass module that provides a generic interface for
paravirtual drivers to listen for netdev register/unregister/link change
events from pci ethernet devices with the same MAC and takeover their
datapath. The notifier and event handling code is based on the existing
netvsc implementation. A paravirtual driver can use this module by
registering a set of ops and each instance of the device when it is
probed.

Patch 3 extends virtio_net to use the alternate datapath when available
and registered. When the BACKUP feature is enabled, the virtio_net
driver creates an additional 'bypass' netdev that acts as a master
device and controls 2 slave devices. The original virtio_net netdev is
registered as the 'backup' netdev and a passthru/vf device with the same
MAC gets registered as the 'active' netdev. Both 'bypass' and 'backup'
netdevs are associated with the same 'pci' device. The user accesses the
network interface via the 'bypass' netdev. The 'bypass' netdev chooses
the 'active' netdev as the default for transmits when it is available
with link up and running.

Patch 4 refactors netvsc to use the registration/notification framework
supported by the bypass module.

As this patch series is initially focusing on usecases where the
hypervisor fully controls the VM networking and the guest is not
expected to directly configure any hardware settings, it doesn't expose
all the ndo/ethtool ops that are supported by virtio_net at this time.
To support additional usecases, it should be possible to enable
additional ops later by caching the state in the virtio netdev and
replaying it when the 'active' netdev gets registered.

The hypervisor needs to enable only one datapath at any time so that
packets don't get looped back to the VM over the other datapath. When a
VF is plugged, the virtio datapath link state can be marked as down. At
the time of live migration, the hypervisor needs to unplug the VF device
from the guest on the source host and reset the MAC filter of the VF to
initiate failover of the datapath to virtio before starting the
migration. After the migration is completed, the destination hypervisor
sets the MAC filter on the VF and plugs it back to the guest to switch
over to the VF datapath.

This patch is based on the discussion initiated by Jesse on this thread.
https://marc.info/?l=linux-virtualization=151189725224231=2

v5 RFC:
  Based on Jiri's comments, moved the common functionality to a 'bypass'
  module so that the same notifier and event handlers that handle child
  register/unregister/link change events can be shared between
  virtio_net and netvsc. Improved error handling based on Siwei's
  comments.

v4:
- Based on the review comments on the v3 version of the RFC patch and
  Jakub's suggestion for the naming issue with the 3 netdev solution,
  proposed a 3 netdev in-driver bonding solution for virtio-net.

v3 RFC:
- Introduced the 3 netdev model and pointed out a couple of issues with
  that model and proposed the 2 netdev model to avoid these issues.
- Removed the broadcast/multicast optimization and only use virtio as
  the backup path when the VF is unplugged.

v2 RFC:
- Changed VIRTIO_NET_F_MASTER to VIRTIO_NET_F_BACKUP (mst)
- Made a small change to the virtio-net xmit path to only use the VF
  datapath for unicasts. Broadcasts/multicasts use the virtio datapath.
  This avoids east-west broadcasts going over the PCI link.
- Added support for the feature bit in qemu

Sridhar Samudrala (4):
  virtio_net: Introduce VIRTIO_NET_F_BACKUP feature bit
  net: Introduce generic bypass module
  virtio_net: Extend virtio to use VF datapath when available
  netvsc: refactor notifier/event handling code to use the bypass
    framework

 drivers/net/Kconfig             |   1 +
 drivers/net/hyperv/Kconfig      |   1 +
 drivers/net/hyperv/netvsc_drv.c | 219 --
 drivers/net/virtio_net.c        | 614 +++-
 include/net/bypass.h            |  80 ++
 include/uapi/linux/virtio_net.h |   3 +
 net/Kconfig                     |  18 ++
 net/core/Makefile               |   1 +
 net/core/bypass.c               | 406 ++
 9 files changed, 1184 insertions(+), 159 deletions(-)
 create mode 100644 include/net/bypass.h
 create mode 100644 net/core/bypass.c

--
2.14.3
Re: [RFC] virtio: Use DMA MAP API for devices without an IOMMU
On Fri, Apr 06, 2018 at 01:09:43AM +1000, Benjamin Herrenschmidt wrote:
> On Thu, 2018-04-05 at 17:54 +0300, Michael S. Tsirkin wrote:
> > On Thu, Apr 05, 2018 at 08:09:30PM +0530, Anshuman Khandual wrote:
> > > On 04/05/2018 04:26 PM, Anshuman Khandual wrote:
> > > > There are certain platforms which would like to use SWIOTLB based DMA API
> > > > for bouncing purpose without actually requiring an IOMMU back end. But the
> > > > virtio core does not allow such mechanism. Right now DMA MAP API is only
> > > > selected for devices which have an IOMMU and then the QEMU/host back end
> > > > will process all incoming SG buffer addresses as IOVA instead of simple
> > > > GPA which is the case for simple bounce buffers after being processed with
> > > > SWIOTLB API. To enable this usage, it introduces an architecture specific
> > > > function which will just make virtio core front end select DMA operations
> > > > structure.
> > > >
> > > > Signed-off-by: Anshuman Khandual
> > >
> > > + "Michael S. Tsirkin"
> >
> > I'm confused by this.
> >
> > static bool vring_use_dma_api(struct virtio_device *vdev)
> > {
> >         if (!virtio_has_iommu_quirk(vdev))
> >                 return true;
> >
> > Why isn't setting VIRTIO_F_IOMMU_PLATFORM on the
> > hypervisor side sufficient?
>
> In this specific case, because that would make qemu expect an iommu,
> and there isn't one.

I think that you can set iommu_platform in qemu without an iommu.

> Anshuman, you need to provide more background here. I don't have time
> right now it's late, but explain about the fact that this is for a
> specific type of secure VM which has only a limited pool of (insecure)
> memory that can be shared with qemu, so all IOs need to bounce via that
> pool, which can be achieved by using swiotlb.
>
> Note: this isn't urgent, we can discuss alternative approaches, this is
> just to start the conversation.
>
> Cheers,
> Ben.
Re: [PATCH v30 2/4] virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT
On Thu, Apr 05, 2018 at 03:47:28PM +0000, Wang, Wei W wrote:
> On Thursday, April 5, 2018 10:04 PM, Michael S. Tsirkin wrote:
> > On Thu, Apr 05, 2018 at 02:05:03AM +0000, Wang, Wei W wrote:
> > > On Thursday, April 5, 2018 9:12 AM, Michael S. Tsirkin wrote:
> > > > On Thu, Apr 05, 2018 at 12:30:27AM +0000, Wang, Wei W wrote:
> > > > > On Wednesday, April 4, 2018 10:08 PM, Michael S. Tsirkin wrote:
> > > > > > On Wed, Apr 04, 2018 at 10:07:51AM +0800, Wei Wang wrote:
> > > > > > > On 04/04/2018 02:47 AM, Michael S. Tsirkin wrote:
> > > > > > > > On Wed, Apr 04, 2018 at 12:10:03AM +0800, Wei Wang wrote:
> > > > > I'm afraid the driver couldn't be aware if the added hints are
> > > > > stale or not,
> > > >
> > > > No - I mean that driver has code that compares two values and stops
> > > > reporting. Can one of the values be stale?
> > >
> > > The driver compares "vb->cmd_id_use != vb->cmd_id_received" to decide
> > > if it needs to stop reporting hints, and cmd_id_received is what the
> > > driver reads from host (host notifies the driver to read for the
> > > latest value). If host sends a new cmd id, it will notify the guest to
> > > read again. I'm not sure how that could be a stale cmd id (or maybe I
> > > misunderstood your point here?)
> > >
> > > Best,
> > > Wei
> >
> > The comparison is done in one thread, the update in another one.
>
> I think this isn't something that could be solved by adding a lock,
> unless host waits for the driver's ACK about finishing the update
> (this is not agreed in the QEMU part discussion).
>
> Actually virtio_balloon has F_IOMMU_PLATFORM disabled, maybe we don't
> need to worry about that using DMA api case (we only have gpa added to
> the vq, and having some entries stay in the vq seems fine). For this
> feature, I think it would not work with F_IOMMU enabled either.

Adding a code comment explaining all this might be a good idea.

> If there is any further need (I couldn't think of a need so far), I
> think we could consider to let host inject a vq interrupt at some
> point, and then the driver handler can do the virtqueue_get_buf work.
>
> Best,
> Wei
Re: [Intel-gfx] [PATCH 08/13] drm/virtio: Stop updating plane->crtc
On Thu, Apr 05, 2018 at 06:13:55PM +0300, Ville Syrjala wrote:
> From: Ville Syrjälä
>
> We want to get rid of plane->crtc on atomic drivers. Stop setting it.
>
> v2: s/fb/crtc/ in the commit message (Gerd)
>
> Cc: David Airlie
> Cc: Gerd Hoffmann
> Cc: virtualization@lists.linux-foundation.org
> Signed-off-by: Ville Syrjälä
> Reviewed-by: Maarten Lankhorst

Reviewed-by: Daniel Vetter

> ---
>  drivers/gpu/drm/virtio/virtgpu_display.c | 2 --
>  1 file changed, 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/virtio/virtgpu_display.c b/drivers/gpu/drm/virtio/virtgpu_display.c
> index 8cc8c34d67f5..42e842ceb53c 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_display.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_display.c
> @@ -302,8 +302,6 @@ static int vgdev_output_init(struct virtio_gpu_device *vgdev, int index)
> 	drm_crtc_init_with_planes(dev, crtc, primary, cursor,
> 				  &virtio_gpu_crtc_funcs, NULL);
> 	drm_crtc_helper_add(crtc, &virtio_gpu_crtc_helper_funcs);
> -	primary->crtc = crtc;
> -	cursor->crtc = crtc;
>
> 	drm_connector_init(dev, connector, &virtio_gpu_connector_funcs,
> 			   DRM_MODE_CONNECTOR_VIRTUAL);
> --
> 2.16.1

--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
RE: [PATCH v30 2/4] virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT
On Thursday, April 5, 2018 10:04 PM, Michael S. Tsirkin wrote:
> On Thu, Apr 05, 2018 at 02:05:03AM +0000, Wang, Wei W wrote:
> > On Thursday, April 5, 2018 9:12 AM, Michael S. Tsirkin wrote:
> > > On Thu, Apr 05, 2018 at 12:30:27AM +0000, Wang, Wei W wrote:
> > > > On Wednesday, April 4, 2018 10:08 PM, Michael S. Tsirkin wrote:
> > > > > On Wed, Apr 04, 2018 at 10:07:51AM +0800, Wei Wang wrote:
> > > > > > On 04/04/2018 02:47 AM, Michael S. Tsirkin wrote:
> > > > > > > On Wed, Apr 04, 2018 at 12:10:03AM +0800, Wei Wang wrote:
> > > > I'm afraid the driver couldn't be aware if the added hints are
> > > > stale or not,
> > >
> > > No - I mean that driver has code that compares two values and stops
> > > reporting. Can one of the values be stale?
> >
> > The driver compares "vb->cmd_id_use != vb->cmd_id_received" to decide
> > if it needs to stop reporting hints, and cmd_id_received is what the
> > driver reads from host (host notifies the driver to read for the
> > latest value). If host sends a new cmd id, it will notify the guest to
> > read again. I'm not sure how that could be a stale cmd id (or maybe I
> > misunderstood your point here?)
> >
> > Best,
> > Wei
>
> The comparison is done in one thread, the update in another one.

I think this isn't something that could be solved by adding a lock,
unless host waits for the driver's ACK about finishing the update
(this is not agreed in the QEMU part discussion).

Actually virtio_balloon has F_IOMMU_PLATFORM disabled, maybe we don't
need to worry about that using DMA api case (we only have gpa added to
the vq, and having some entries stay in the vq seems fine). For this
feature, I think it would not work with F_IOMMU enabled either.

If there is any further need (I couldn't think of a need so far), I
think we could consider to let host inject a vq interrupt at some
point, and then the driver handler can do the virtqueue_get_buf work.

Best,
Wei
Re: [RFC] virtio: Use DMA MAP API for devices without an IOMMU
On Thu, 2018-04-05 at 17:54 +0300, Michael S. Tsirkin wrote:
> On Thu, Apr 05, 2018 at 08:09:30PM +0530, Anshuman Khandual wrote:
> > On 04/05/2018 04:26 PM, Anshuman Khandual wrote:
> > > There are certain platforms which would like to use SWIOTLB based DMA API
> > > for bouncing purpose without actually requiring an IOMMU back end. But the
> > > virtio core does not allow such mechanism. Right now DMA MAP API is only
> > > selected for devices which have an IOMMU and then the QEMU/host back end
> > > will process all incoming SG buffer addresses as IOVA instead of simple
> > > GPA which is the case for simple bounce buffers after being processed with
> > > SWIOTLB API. To enable this usage, it introduces an architecture specific
> > > function which will just make virtio core front end select DMA operations
> > > structure.
> > >
> > > Signed-off-by: Anshuman Khandual
> >
> > + "Michael S. Tsirkin"
>
> I'm confused by this.
>
> static bool vring_use_dma_api(struct virtio_device *vdev)
> {
>         if (!virtio_has_iommu_quirk(vdev))
>                 return true;
>
> Why isn't setting VIRTIO_F_IOMMU_PLATFORM on the
> hypervisor side sufficient?

In this specific case, because that would make qemu expect an iommu,
and there isn't one.

Anshuman, you need to provide more background here. I don't have time
right now it's late, but explain about the fact that this is for a
specific type of secure VM which has only a limited pool of (insecure)
memory that can be shared with qemu, so all IOs need to bounce via that
pool, which can be achieved by using swiotlb.

Note: this isn't urgent, we can discuss alternative approaches, this is
just to start the conversation.

Cheers,
Ben.
Re: [virtio-dev] Re: [RFC PATCH 1/3] qemu: virtio-bypass should explicitly bind to a passthrough device
On 04/04/2018 10:02, Siwei Liu wrote:
>> pci_bus_num is almost always a bug if not done within
>> a context of a PCI host, bridge, etc.
>>
>> In particular this will not DTRT if run before guest assigns bus
>> numbers.
>
> I was seeking means to reserve a specific pci bus slot from drivers,
> and update the driver when guest assigns the bus number but it seems
> there's no low-hanging fruits. Because of that reason the bus_num is
> only obtained until it's really needed (during get_config) and I
> assume at that point the pci bus assignment is already done. I know
> the current one is not perfect, but we need that information (PCI
> bus:slot.func number) to name the guest device correctly.

Can you use the -device "id", and look it up as

    devices = container_get(qdev_get_machine(), "/peripheral");
    return object_resolve_path_component(devices, id);

?

Thanks,

Paolo
Re: [RFC] virtio: Use DMA MAP API for devices without an IOMMU
On Thu, Apr 05, 2018 at 08:09:30PM +0530, Anshuman Khandual wrote:
> On 04/05/2018 04:26 PM, Anshuman Khandual wrote:
> > There are certain platforms which would like to use SWIOTLB based DMA API
> > for bouncing purpose without actually requiring an IOMMU back end. But the
> > virtio core does not allow such mechanism. Right now DMA MAP API is only
> > selected for devices which have an IOMMU and then the QEMU/host back end
> > will process all incoming SG buffer addresses as IOVA instead of simple
> > GPA which is the case for simple bounce buffers after being processed with
> > SWIOTLB API. To enable this usage, it introduces an architecture specific
> > function which will just make virtio core front end select DMA operations
> > structure.
> >
> > Signed-off-by: Anshuman Khandual
>
> + "Michael S. Tsirkin"

I'm confused by this.

static bool vring_use_dma_api(struct virtio_device *vdev)
{
        if (!virtio_has_iommu_quirk(vdev))
                return true;

Why isn't setting VIRTIO_F_IOMMU_PLATFORM on the
hypervisor side sufficient?
Re: [RFC] virtio: Use DMA MAP API for devices without an IOMMU
On 04/05/2018 04:26 PM, Anshuman Khandual wrote:
> There are certain platforms which would like to use SWIOTLB based DMA API
> for bouncing purpose without actually requiring an IOMMU back end. But the
> virtio core does not allow such mechanism. Right now DMA MAP API is only
> selected for devices which have an IOMMU and then the QEMU/host back end
> will process all incoming SG buffer addresses as IOVA instead of simple
> GPA which is the case for simple bounce buffers after being processed with
> SWIOTLB API. To enable this usage, it introduces an architecture specific
> function which will just make virtio core front end select DMA operations
> structure.
>
> Signed-off-by: Anshuman Khandual

+ "Michael S. Tsirkin"
Re: [RFC PATCH 0/2] use larger max_request_size for virtio_blk
On 4/5/18 4:09 AM, Weiping Zhang wrote:
> Hi,
>
> For virtio block device, actually there is no hard limit for the max
> request size, and the virtio_blk driver sets -1 via
> blk_queue_max_hw_sectors(q, -1U);. But it doesn't work, because there
> is a default upper limitation BLK_DEF_MAX_SECTORS (1280 sectors). So
> this series wants to add a new helper
> blk_queue_max_hw_sectors_no_limit to set a proper max request size.
>
> Weiping Zhang (2):
>   blk-setting: add new helper blk_queue_max_hw_sectors_no_limit
>   virtio_blk: add new module parameter to set max request size
>
>  block/blk-settings.c       | 20
>  drivers/block/virtio_blk.c | 32 ++
>  include/linux/blkdev.h     |  2 ++
>  3 files changed, 52 insertions(+), 2 deletions(-)

The driver should just use blk_queue_max_hw_sectors() to set the limit,
and then the soft limit can be modified by a udev rule. Technically the
driver doesn't own the software limit, it's imposed to ensure that we
don't introduce too much latency per request. Your situation is no
different from many other setups, where the hw limit is much higher
than the default 1280k.

--
Jens Axboe
Re: [PATCH v30 2/4] virtio-balloon: VIRTIO_BALLOON_F_FREE_PAGE_HINT
On Thu, Apr 05, 2018 at 02:05:03AM +0000, Wang, Wei W wrote:
> On Thursday, April 5, 2018 9:12 AM, Michael S. Tsirkin wrote:
> > On Thu, Apr 05, 2018 at 12:30:27AM +0000, Wang, Wei W wrote:
> > > On Wednesday, April 4, 2018 10:08 PM, Michael S. Tsirkin wrote:
> > > > On Wed, Apr 04, 2018 at 10:07:51AM +0800, Wei Wang wrote:
> > > > > On 04/04/2018 02:47 AM, Michael S. Tsirkin wrote:
> > > > > > On Wed, Apr 04, 2018 at 12:10:03AM +0800, Wei Wang wrote:
> > > I'm afraid the driver couldn't be aware if the added hints are stale
> > > or not,
> >
> > No - I mean that driver has code that compares two values and stops
> > reporting. Can one of the values be stale?
>
> The driver compares "vb->cmd_id_use != vb->cmd_id_received" to decide if it
> needs to stop reporting hints, and cmd_id_received is what the driver reads
> from host (host notifies the driver to read for the latest value). If host
> sends a new cmd id, it will notify the guest to read again. I'm not sure how
> that could be a stale cmd id (or maybe I misunderstood your point here?)
>
> Best,
> Wei

The comparison is done in one thread, the update in another one.

--
MST
Re: [RFC PATCH 0/2] use larger max_request_size for virtio_blk
Weiping,

> For virtio block device, actually there is no hard limit for max
> request size, and virtio_blk driver set -1 to
> blk_queue_max_hw_sectors(q, -1U);. But it doesn't work, because there
> is a default upper limitation BLK_DEF_MAX_SECTORS (1280 sectors).

That's intentional (although it's an ongoing debate what the actual value
should be).

> So this series want to add a new helper
> blk_queue_max_hw_sectors_no_limit to set a proper max request size.

BLK_DEF_MAX_SECTORS is a kernel default empirically chosen to strike a
decent balance between I/O latency and bandwidth. It sets an upper bound
for filesystem requests only, regardless of the capabilities of the block
device driver and underlying hardware.

You can override the limit on a per-device basis via max_sectors_kb in
sysfs. People generally do it via a udev rule.

-- 
Martin K. Petersen	Oracle Linux Engineering
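[Editor's sketch of the override Martin describes. The device name (vda),
the value (1024 KiB), and the rules-file path are illustrative assumptions,
not values taken from this thread.]

```shell
# Illustrative only: "vda", 1024 KiB, and the rules path are assumptions.
DEV=vda
KB=1024

# One-shot override of the soft limit (run as root on a real system):
#   echo "$KB" > /sys/block/$DEV/queue/max_sectors_kb

# Persistent override via udev, e.g. in /etc/udev/rules.d/60-block.rules:
RULE="ACTION==\"add|change\", KERNEL==\"$DEV\", ATTR{queue/max_sectors_kb}=\"$KB\""
echo "$RULE"
```

Note that max_sectors_kb can only be raised up to max_hw_sectors_kb, which
is what the driver sets via blk_queue_max_hw_sectors().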
Re: [RFC] virtio: Use DMA MAP API for devices without an IOMMU
On 04/05/2018 04:44 PM, Balbir Singh wrote:
> On Thu, Apr 5, 2018 at 8:56 PM, Anshuman Khandual wrote:
>> There are certain platforms which would like to use SWIOTLB based DMA API
>> for bouncing purpose without actually requiring an IOMMU back end. But the
>> virtio core does not allow such a mechanism. Right now the DMA MAP API is
>> only selected for devices which have an IOMMU, and then the QEMU/host back
>> end will process all incoming SG buffer addresses as IOVA instead of simple
>> GPA, which is the case for simple bounce buffers after being processed with
>> the SWIOTLB API. To enable this usage, this patch introduces an architecture
>> specific function which will just make the virtio core front end select the
>> DMA operations structure.
>>
>> Signed-off-by: Anshuman Khandual
>> ---
>> This RFC is just to get some feedback. Please ignore the function call
>> back into the architecture. It can be worked out properly later on. But
>> the question is: can we have virtio devices in the guest which would like
>> to use SWIOTLB based (or any custom DMA API based) bounce buffering
>> without actually being IOMMU devices emulated by QEMU/host, as is the
>> case with the current VIRTIO_F_IOMMU_PLATFORM virtio flag?
>>
>>  arch/powerpc/platforms/pseries/iommu.c | 6 ++++++
>>  drivers/virtio/virtio_ring.c           | 4 ++++
>>  include/linux/virtio.h                 | 2 ++
>>  3 files changed, 12 insertions(+)
>>
>> diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
>> index 06f02960b439..dd15fbddbe89 100644
>> --- a/arch/powerpc/platforms/pseries/iommu.c
>> +++ b/arch/powerpc/platforms/pseries/iommu.c
>> @@ -1396,3 +1396,9 @@ static int __init disable_multitce(char *str)
>>  __setup("multitce=", disable_multitce);
>>
>>  machine_subsys_initcall_sync(pseries, tce_iommu_bus_notifier_init);
>> +
>> +bool is_virtio_dma_platform(void)
>> +{
>> +	return true;
>> +}
>> +EXPORT_SYMBOL(is_virtio_dma_platform);
>> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
>> index 71458f493cf8..9f205a79d378 100644
>> --- a/drivers/virtio/virtio_ring.c
>> +++ b/drivers/virtio/virtio_ring.c
>> @@ -144,6 +144,10 @@ struct vring_virtqueue {
>>
>>  static bool vring_use_dma_api(struct virtio_device *vdev)
>>  {
>> +	/* Use DMA API even for virtio devices without an IOMMU */
>> +	if (is_virtio_dma_platform())
>> +		return true;
>> +
>>  	if (!virtio_has_iommu_quirk(vdev))
>>  		return true;
>>
>> diff --git a/include/linux/virtio.h b/include/linux/virtio.h
>> index 988c7355bc22..d8bb83d753ea 100644
>> --- a/include/linux/virtio.h
>> +++ b/include/linux/virtio.h
>> @@ -200,6 +200,8 @@ static inline struct virtio_driver *drv_to_virtio(struct device_driver *drv)
>>  int register_virtio_driver(struct virtio_driver *drv);
>>  void unregister_virtio_driver(struct virtio_driver *drv);
>>
>> +extern bool is_virtio_dma_platform(void);
>> +
>
> Where is the default implementation for non-pseries platforms? Will they
> compile after these changes?

No they won't. This is just an RFC asking for suggestions/feedback on a
particular direction; I will clean up the code later on once we agree on
this.
Re: [RFC] virtio: Use DMA MAP API for devices without an IOMMU
On Thu, Apr 5, 2018 at 8:56 PM, Anshuman Khandual wrote:
> There are certain platforms which would like to use SWIOTLB based DMA API
> for bouncing purpose without actually requiring an IOMMU back end. But the
> virtio core does not allow such a mechanism. Right now the DMA MAP API is
> only selected for devices which have an IOMMU, and then the QEMU/host back
> end will process all incoming SG buffer addresses as IOVA instead of simple
> GPA, which is the case for simple bounce buffers after being processed with
> the SWIOTLB API. To enable this usage, this patch introduces an architecture
> specific function which will just make the virtio core front end select the
> DMA operations structure.
>
> Signed-off-by: Anshuman Khandual
> ---
> This RFC is just to get some feedback. Please ignore the function call
> back into the architecture. It can be worked out properly later on. But
> the question is: can we have virtio devices in the guest which would like
> to use SWIOTLB based (or any custom DMA API based) bounce buffering
> without actually being IOMMU devices emulated by QEMU/host, as is the
> case with the current VIRTIO_F_IOMMU_PLATFORM virtio flag?
>
>  arch/powerpc/platforms/pseries/iommu.c | 6 ++++++
>  drivers/virtio/virtio_ring.c           | 4 ++++
>  include/linux/virtio.h                 | 2 ++
>  3 files changed, 12 insertions(+)
>
> diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
> index 06f02960b439..dd15fbddbe89 100644
> --- a/arch/powerpc/platforms/pseries/iommu.c
> +++ b/arch/powerpc/platforms/pseries/iommu.c
> @@ -1396,3 +1396,9 @@ static int __init disable_multitce(char *str)
>  __setup("multitce=", disable_multitce);
>
>  machine_subsys_initcall_sync(pseries, tce_iommu_bus_notifier_init);
> +
> +bool is_virtio_dma_platform(void)
> +{
> +	return true;
> +}
> +EXPORT_SYMBOL(is_virtio_dma_platform);
> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> index 71458f493cf8..9f205a79d378 100644
> --- a/drivers/virtio/virtio_ring.c
> +++ b/drivers/virtio/virtio_ring.c
> @@ -144,6 +144,10 @@ struct vring_virtqueue {
>
>  static bool vring_use_dma_api(struct virtio_device *vdev)
>  {
> +	/* Use DMA API even for virtio devices without an IOMMU */
> +	if (is_virtio_dma_platform())
> +		return true;
> +
>  	if (!virtio_has_iommu_quirk(vdev))
>  		return true;
>
> diff --git a/include/linux/virtio.h b/include/linux/virtio.h
> index 988c7355bc22..d8bb83d753ea 100644
> --- a/include/linux/virtio.h
> +++ b/include/linux/virtio.h
> @@ -200,6 +200,8 @@ static inline struct virtio_driver *drv_to_virtio(struct device_driver *drv)
>  int register_virtio_driver(struct virtio_driver *drv);
>  void unregister_virtio_driver(struct virtio_driver *drv);
>
> +extern bool is_virtio_dma_platform(void);
> +

Where is the default implementation for non-pseries platforms? Will they
compile after these changes?

Balbir
[RFC] virtio: Use DMA MAP API for devices without an IOMMU
There are certain platforms which would like to use SWIOTLB based DMA API
for bouncing purpose without actually requiring an IOMMU back end. But the
virtio core does not allow such a mechanism. Right now the DMA MAP API is
only selected for devices which have an IOMMU, and then the QEMU/host back
end will process all incoming SG buffer addresses as IOVA instead of simple
GPA, which is the case for simple bounce buffers after being processed with
the SWIOTLB API. To enable this usage, this patch introduces an architecture
specific function which will just make the virtio core front end select the
DMA operations structure.

Signed-off-by: Anshuman Khandual
---
This RFC is just to get some feedback. Please ignore the function call
back into the architecture. It can be worked out properly later on. But
the question is: can we have virtio devices in the guest which would like
to use SWIOTLB based (or any custom DMA API based) bounce buffering without
actually being IOMMU devices emulated by QEMU/host, as is the case with
the current VIRTIO_F_IOMMU_PLATFORM virtio flag?
 arch/powerpc/platforms/pseries/iommu.c | 6 ++++++
 drivers/virtio/virtio_ring.c           | 4 ++++
 include/linux/virtio.h                 | 2 ++
 3 files changed, 12 insertions(+)

diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
index 06f02960b439..dd15fbddbe89 100644
--- a/arch/powerpc/platforms/pseries/iommu.c
+++ b/arch/powerpc/platforms/pseries/iommu.c
@@ -1396,3 +1396,9 @@ static int __init disable_multitce(char *str)
 __setup("multitce=", disable_multitce);
 
 machine_subsys_initcall_sync(pseries, tce_iommu_bus_notifier_init);
+
+bool is_virtio_dma_platform(void)
+{
+	return true;
+}
+EXPORT_SYMBOL(is_virtio_dma_platform);
diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 71458f493cf8..9f205a79d378 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -144,6 +144,10 @@ struct vring_virtqueue {
 
 static bool vring_use_dma_api(struct virtio_device *vdev)
 {
+	/* Use DMA API even for virtio devices without an IOMMU */
+	if (is_virtio_dma_platform())
+		return true;
+
 	if (!virtio_has_iommu_quirk(vdev))
 		return true;
 
diff --git a/include/linux/virtio.h b/include/linux/virtio.h
index 988c7355bc22..d8bb83d753ea 100644
--- a/include/linux/virtio.h
+++ b/include/linux/virtio.h
@@ -200,6 +200,8 @@ static inline struct virtio_driver *drv_to_virtio(struct device_driver *drv)
 int register_virtio_driver(struct virtio_driver *drv);
 void unregister_virtio_driver(struct virtio_driver *drv);
 
+extern bool is_virtio_dma_platform(void);
+
 /* module_virtio_driver() - Helper macro for drivers that don't do
  * anything special in module init/exit. This eliminates a lot of
  * boilerplate. Each module may only use this macro once, and
-- 
2.14.1
[RFC PATCH 0/2] use larger max_request_size for virtio_blk
Hi,

For virtio block device, actually there is no hard limit for the max
request size, and the virtio_blk driver sets -1 via
blk_queue_max_hw_sectors(q, -1U);. But it doesn't work, because there is a
default upper limitation, BLK_DEF_MAX_SECTORS (1280 sectors). So this
series wants to add a new helper, blk_queue_max_hw_sectors_no_limit, to
set a proper max request size.

Weiping Zhang (2):
  blk-setting: add new helper blk_queue_max_hw_sectors_no_limit
  virtio_blk: add new module parameter to set max request size

 block/blk-settings.c       | 20 ++++++++++++++++++++
 drivers/block/virtio_blk.c | 32 ++++++++++++++++++++++++++++++--
 include/linux/blkdev.h     |  2 ++
 3 files changed, 52 insertions(+), 2 deletions(-)

-- 
2.9.4
[RFC PATCH 1/2] blk-setting: add new helper blk_queue_max_hw_sectors_no_limit
There is a default upper limitation, BLK_DEF_MAX_SECTORS, but for some
virtual block device drivers there is no such limitation. So add a new
helper to set the max request size.

Signed-off-by: Weiping Zhang
---
 block/blk-settings.c   | 20 ++++++++++++++++++++
 include/linux/blkdev.h |  2 ++
 2 files changed, 22 insertions(+)

diff --git a/block/blk-settings.c b/block/blk-settings.c
index 48ebe6b..685c30c 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -253,6 +253,26 @@ void blk_queue_max_hw_sectors(struct request_queue *q, unsigned int max_hw_secto
 }
 EXPORT_SYMBOL(blk_queue_max_hw_sectors);
 
+/* same as blk_queue_max_hw_sectors but without default upper limitation */
+void blk_queue_max_hw_sectors_no_limit(struct request_queue *q,
+				unsigned int max_hw_sectors)
+{
+	struct queue_limits *limits = &q->limits;
+	unsigned int max_sectors;
+
+	if ((max_hw_sectors << 9) < PAGE_SIZE) {
+		max_hw_sectors = 1 << (PAGE_SHIFT - 9);
+		printk(KERN_INFO "%s: set to minimum %d\n",
+		       __func__, max_hw_sectors);
+	}
+
+	limits->max_hw_sectors = max_hw_sectors;
+	max_sectors = min_not_zero(max_hw_sectors, limits->max_dev_sectors);
+	limits->max_sectors = max_sectors;
+	q->backing_dev_info->io_pages = max_sectors >> (PAGE_SHIFT - 9);
+}
+EXPORT_SYMBOL(blk_queue_max_hw_sectors_no_limit);
+
 /**
  * blk_queue_chunk_sectors - set size of the chunk for this queue
  * @q:  the request queue for the device
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index ed63f3b..2250709 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1243,6 +1243,8 @@ extern void blk_cleanup_queue(struct request_queue *);
 extern void blk_queue_make_request(struct request_queue *, make_request_fn *);
 extern void blk_queue_bounce_limit(struct request_queue *, u64);
 extern void blk_queue_max_hw_sectors(struct request_queue *, unsigned int);
+extern void blk_queue_max_hw_sectors_no_limit(struct request_queue *,
+				unsigned int);
 extern void blk_queue_chunk_sectors(struct request_queue *, unsigned int);
 extern void blk_queue_max_segments(struct request_queue *, unsigned short);
 extern void blk_queue_max_discard_segments(struct request_queue *,
-- 
2.9.4
[RFC PATCH 2/2] virtio_blk: add new module parameter to set max request size
Actually there is no upper limitation, so add a new module parameter to
provide a way to set a proper max request size for virtio block. Using a
larger request size can improve sequential performance in theory, and
reduce the interaction between guest and hypervisor.

Signed-off-by: Weiping Zhang
---
 drivers/block/virtio_blk.c | 32 ++++++++++++++++++++++++++++++--
 1 file changed, 30 insertions(+), 2 deletions(-)

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 4a07593c..5ac6d59 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -64,6 +64,34 @@ struct virtblk_req {
 	struct scatterlist sg[];
 };
 
+static int max_request_size_set(const char *val, const struct kernel_param *kp);
+
+static const struct kernel_param_ops max_request_size_ops = {
+	.set = max_request_size_set,
+	.get = param_get_uint,
+};
+
+static unsigned int max_request_size = 4096;	/* in unit of KiB */
+module_param_cb(max_request_size, &max_request_size_ops, &max_request_size,
+		0444);
+MODULE_PARM_DESC(max_request_size, "set max request size, in unit of KiB");
+
+static int max_request_size_set(const char *val, const struct kernel_param *kp)
+{
+	int ret;
+	unsigned int size_kb, page_kb = 1 << (PAGE_SHIFT - 10);
+
+	ret = kstrtouint(val, 10, &size_kb);
+	if (ret != 0)
+		return -EINVAL;
+
+	if (size_kb < page_kb)
+		return -EINVAL;
+
+	return param_set_uint(val, kp);
+}
+
 static inline blk_status_t virtblk_result(struct virtblk_req *vbr)
 {
 	switch (vbr->status) {
@@ -730,8 +758,8 @@ static int virtblk_probe(struct virtio_device *vdev)
 	/* We can handle whatever the host told us to handle. */
 	blk_queue_max_segments(q, vblk->sg_elems - 2);
 
-	/* No real sector limit. */
-	blk_queue_max_hw_sectors(q, -1U);
+	/* No real sector limit, use 512b sectors: (max_request_size << 10) >> 9 */
+	blk_queue_max_hw_sectors_no_limit(q, max_request_size << 1);
 
 	/* Host can optionally specify maximum segment size and number of
 	 * segments.
 	 */
-- 
2.9.4