On 11/13/2010 11:05 AM, Américo Wang wrote:
On Wed, Nov 03, 2010 at 10:59:44AM -0400, Jeremy Fitzhardinge wrote:
* On PPro SMP or if we are using OOSTORE, we use a locked operation to
unlock
* (PPro errata 66, 92)
*/
-# define UNLOCK_LOCK_PREFIX LOCK_PREFIX
+static __always_inline
Hi all,
after some preliminary discussion on the QEMU mailing list, I present a
draft specification for a virtio-based SCSI host (controller, HBA, you
name it).
The virtio SCSI host is the basis of an alternative storage stack for
KVM. This stack would overcome several limitations of the current
On 06/09/2011 01:28 AM, Rusty Russell wrote:
after some preliminary discussion on the QEMU mailing list, I present a
draft specification for a virtio-based SCSI host (controller, HBA, you
name it).
OK, I'm impressed. This is very well written and it doesn't make any of
the obvious
On 06/10/2011 02:14 PM, Stefan Hajnoczi wrote:
Paolo, I'll switch the Linux guest LLD and QEMU virtio-scsi skeleton
that I have to comply with the spec. Does this sound good or did you
want to write these from scratch?
Why should I want to write things from scratch? :) Just send me again a
If requests are placed on arbitrary queues you'll inevitably run into
locking issues to ensure strict request ordering.
I would add here:
If a device uses more than one queue it is the responsibility of the
device to ensure strict request ordering.
Applied with s/device/guest/g.
Please do
On 06/12/2011 09:51 AM, Michael S. Tsirkin wrote:
If a device uses more than one queue it is the responsibility of the
device to ensure strict request ordering.
Maybe I misunderstand - how can this be the responsibility of
the device if the device does not get the information about
the
On 06/14/2011 10:39 AM, Hannes Reinecke wrote:
If, however, we decide to expose some details about the backend, we
could be using the values from the backend directly.
EG we could be forwarding the SCSI target port identifier here
(if backed by real hardware) or creating our own SAS-type
On 06/29/2011 12:03 PM, Christoph Hellwig wrote:
I agree here; in fact I misread Hannes's comment as "if a driver
uses more than one queue it is the responsibility of the driver to
ensure strict request ordering". If you send requests to different
queues, you know that those requests are
On 06/29/2011 11:39 AM, Stefan Hajnoczi wrote:
Of course, when doing so we would lose the ability to freely remap
LUNs. But then remapping LUNs doesn't gain you much imho.
Plus you could always use the qemu block backend here if you want
to hide the details.
And you could
On 07/01/2011 09:14 AM, Hannes Reinecke wrote:
Actually, the kernel does _not_ do a LUN remapping.
Not the kernel, the in-kernel target. The in-kernel target can and will
map hardware LUNs (target_lun in drivers/target/*) to arbitrary LUNs
(mapped_lun).
Put another way: the virtio-scsi
Hi all,
here is the specification for a virtio-based SCSI host (controller, HBA,
you name it). The virtio SCSI host is the basis of an alternative
storage stack for KVM. This stack would overcome several limitations of
the current solution, virtio-blk:
1) scalability limitations:
Appendix H: SCSI Host Device
The virtio SCSI host device groups together one or more simple
virtual devices (ie. disk), and allows communicating to these
devices using the SCSI protocol. An instance of the device
represents a SCSI host with possibly many buses (also known as
channels or paths),
On 11/30/2011 03:17 PM, Hannes Reinecke wrote:
seg_max is the maximum number of segments that can be in a
command. A bidirectional command can include seg_max input
segments and seg_max output segments.
I would like to have the other request_queue limitations exposed
here, too.
Most
On 12/01/2011 10:52 AM, Hannes Reinecke wrote:
I would like to have the other request_queue limitations exposed
here, too.
Most notably we're missing the maximum size of an individual segment
and the maximum size of the overall I/O request.
The virtio transport does not impose any limit, as far
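For reference, these limits did eventually surface in the device configuration space. A sketch of the layout as it ended up in the virtio-scsi spec (field names as in include/linux/virtio_scsi.h); seg_max and max_sectors are the two limits discussed above:

/* Sketch of the virtio-scsi configuration space as standardized later;
 * seg_max bounds the segments per request, max_sectors the request size. */
struct virtio_scsi_config {
	u32 num_queues;       /* number of request queues */
	u32 seg_max;          /* maximum segments per request */
	u32 max_sectors;      /* hint: maximum size of a single request */
	u32 cmd_per_lun;      /* hint: maximum outstanding commands per LUN */
	u32 event_info_size;  /* size of each event queue buffer */
	u32 sense_size;       /* maximum size of the sense data */
	u32 cdb_size;         /* maximum size of the CDB */
	u16 max_channel;
	u16 max_target;
	u32 max_lun;
};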
For simplicity, instead of including the whole spec, I am just including
the diff from v1.
--- virtio-spec.txt.v1 2011-11-30 12:21:01.472479754 +0100
+++ virtio-spec.txt 2011-12-05 14:07:02.645044924 +0100
@@ -1,10 +1,9 @@
Appendix H: SCSI Host Device
-The virtio SCSI host device groups
Hi all,
here is the specification for a virtio-based SCSI host (controller, HBA,
you name it). The virtio SCSI host is the basis of an alternative
storage stack for KVM. This stack would overcome several limitations of
the current solution, virtio-blk:
1) scalability limitations:
for
the virtio device, and uses it in virtio-serial.
Cc: Amit Shah amit.s...@redhat.com
Cc: Rusty Russell ru...@rustcorp.com.au
Cc: Michael S. Tsirkin m...@redhat.com
Signed-off-by: Paolo Bonzini pbonz...@redhat.com
---
Untested; what do you think? Would this patch be acceptable
On 18/04/2012 16:21, Michael S. Tsirkin wrote:
@@ -1872,6 +1864,8 @@ static int virtcons_restore(struct virtio_device *vdev)
list_for_each_entry(port, &portdev->ports, list) {
port->in_vq = portdev->in_vqs[port->id];
port->out_vq = portdev->out_vqs[port->id];
+
On 18/04/2012 18:10, Michael S. Tsirkin wrote:
On Wed, Apr 18, 2012 at 04:34:12PM +0200, Paolo Bonzini wrote:
On 18/04/2012 16:21, Michael S. Tsirkin wrote:
@@ -1872,6 +1864,8 @@ static int virtcons_restore(struct virtio_device
*vdev)
list_for_each_entry(port, &portdev->ports
On 08/05/2012 04:11, Rusty Russell wrote:
For virtio-scsi multiqueue support I would like to have an easy and
fast way to go from a virtqueue to the internal struct for that
queue.
It turns out that virtio-serial has the same need, but it gets
by with a simple list walk.
How bad would it be to get rid of the current ->priv and use
container_of() instead? ie. have virtio_pci, virtio_mmio, lguest_bus
and s390's kvm_virtio embed the struct virtqueue?
Something like the following, compile-tested only...
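A minimal sketch of the container_of() idea (hypothetical struct names, simplified; as noted just below, the real vring_virtqueue layout makes this trickier):

#include <linux/kernel.h>
#include <linux/virtio.h>

/* Hypothetical per-queue bookkeeping for a transport: embedding
 * struct virtqueue replaces the old pointer plus ->priv scheme. */
struct vp_vq_info {
	struct virtqueue vq;		/* embedded, not a pointer */
	unsigned int msix_vector;
	/* ... other transport-private state ... */
};

static inline struct vp_vq_info *to_vp_info(struct virtqueue *vq)
{
	return container_of(vq, struct vp_vq_info, vq);
}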
The layout of vring_virtqueue gets a bit complex, with
And it's not a problem if virtqueue is exactly at start of
vring_virtqueue: we just need to allocate a bit more at the start, and
offset when we free. Here's how I would do this: first apply the patch
below that adds the offset parameter, then update all transports, one
patch at a time, to not use
On 20/06/2012 08:55, Cong Meng wrote:
This patch implements the hotplug support for virtio-scsi.
When a device is attached or detached, the virtio-scsi driver will be
signaled via the event virtual queue and will add/remove the scsi
device in question automatically.
Signed-off-by:
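For context, hotplug and hot-unplug arrive as transport-reset events on the event virtqueue. A rough sketch of the dispatch, using the event reason codes from include/linux/virtio_scsi.h (driver-internal types assumed):

/* Rough sketch: VIRTIO_SCSI_EVT_RESET_RESCAN signals a newly attached
 * device, VIRTIO_SCSI_EVT_RESET_REMOVED a detached one. */
static void handle_transport_reset(struct virtio_scsi *vscsi,
				   struct virtio_scsi_event *event)
{
	switch (event->reason) {
	case VIRTIO_SCSI_EVT_RESET_RESCAN:
		/* scan the target/LUN encoded in event->lun */
		break;
	case VIRTIO_SCSI_EVT_RESET_REMOVED:
		/* look up and remove the corresponding scsi_device */
		break;
	}
}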
On 02/07/2012 09:20, m...@linux.vnet.ibm.com wrote:
+static void virtscsi_handle_event(struct work_struct *work)
+{
+struct virtio_scsi_event_node *event_node =
+container_of(work, struct virtio_scsi_event_node, work);
+struct virtio_scsi *vscsi = event_node->vscsi;
+
On 21/06/2012 09:54, Cong Meng wrote:
Add two interfaces, hotplug() and hot_unplug(), to the scsi bus info.
A concrete scsi bus can implement these two interfaces to signal the
HBA driver of the guest kernel to add/remove the scsi device in
question.
Signed-off-by: Cong Meng
On 03/07/2012 07:41, Cong Meng wrote:
This patch implements the hotplug support for virtio-scsi.
When a device is attached or detached, the virtio-scsi driver will be
signaled via the event virtual queue and will add/remove the scsi
device in question automatically.
v2: handle
On 02/07/2012 08:41, Rusty Russell wrote:
With the same workload in guest, the guest fires 200K requests to host
with merges enabled in guest (echo 0 > /sys/block/vdb/queue/nomerges),
while the guest fires 4K requests to host with merges disabled in
guest (echo 2
On 04/07/2012 10:11, m...@linux.vnet.ibm.com wrote:
Signed-off-by: Cong Meng m...@linux.vnet.ibm.com
Signed-off-by: Sen Wang senw...@linux.vnet.ibm.com
The SoB lines are swapped. Otherwise looks good. Since you have to
respin, please add dropped event support too, it shouldn't be
On 03/07/2012 16:28, Dor Laor wrote:
Users using a spinning disk still get IO scheduling in the host though.
What benefit is there in doing it in the guest as well?
The io scheduler waits for requests to merge and thus batches IOs
together. It's not important w.r.t. spinning disks since the
. This fixes a bug
with virtio-scsi/tcm_vhost where LUN scan was not detecting LUNs.
Tested with virtio-scsi-raw + virtio-scsi/tcm_vhost w/ IBLOCK on 3.5-rc2 code.
Cc: Paolo Bonzini pbonz...@redhat.com
Cc: Stefan Hajnoczi stefa...@linux.vnet.ibm.com
Cc: Zhi Yong Wu wu...@cn.ibm.com
Cc
On 04/07/2012 16:02, Michael S. Tsirkin wrote:
On Wed, Jul 04, 2012 at 04:24:00AM +, Nicholas A. Bellinger wrote:
From: Nicholas Bellinger n...@linux-iscsi.org
Hi folks,
This series contains patches required to update tcm_vhost - virtio-scsi
connected hosts - guests to run on
On 04/07/2012 17:42, Michael S. Tsirkin wrote:
On Tue, Jul 03, 2012 at 03:19:37PM +0200, Paolo Bonzini wrote:
This patch adds support for the new VIRTIO_BLK_F_CONFIG_WCE feature,
which exposes the cache mode in the configuration space and lets the
driver modify it. The cache mode
On 04/07/2012 18:02, Michael S. Tsirkin wrote:
On Wed, Jul 04, 2012 at 05:54:16PM +0200, Paolo Bonzini wrote:
On 04/07/2012 17:42, Michael S. Tsirkin wrote:
On Tue, Jul 03, 2012 at 03:19:37PM +0200, Paolo Bonzini wrote:
This patch adds support for the new VIRTIO_BLK_F_CONFIG_WCE
On 04/07/2012 23:30, Michael S. Tsirkin wrote:
+static int virtblk_get_cache_mode(struct virtio_device *vdev)
Why are you converting u8 to int here?
The fact that it is a u8 is really an internal detail. Perhaps the bug
is using u8 in the callers.
Make it bool then?
You are using
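For the record, here is a sketch of what reading and flipping that byte looks like with the virtio_cread/virtio_cwrite config helpers of later kernels (the patch under discussion predates them), returning bool as suggested:

#include <linux/virtio_config.h>
#include <linux/virtio_blk.h>

/* Sketch only: the write-cache mode is the single wce byte in the
 * config space once VIRTIO_BLK_F_CONFIG_WCE is negotiated. */
static bool virtblk_wce(struct virtio_device *vdev)
{
	u8 writeback;

	virtio_cread(vdev, struct virtio_blk_config, wce, &writeback);
	return writeback;	/* bool hides the u8-vs-int detail */
}

static void virtblk_set_wce(struct virtio_device *vdev, bool wce)
{
	u8 writeback = wce;

	virtio_cwrite(vdev, struct virtio_blk_config, wce, &writeback);
}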
On 05/07/2012 09:09, Cong Meng wrote:
This patch implements the hotplug support for virtio-scsi.
When there is a device attached/detached, the virtio-scsi driver will be
signaled via event virtual queue and it will add/remove the scsi device
in question automatically.
v2: handle
touch Xen or KVM files as well, and
the respective mailing list will usually be reached too.
Signed-off-by: Paolo Bonzini pbonz...@redhat.com
---
MAINTAINERS |2 ++
1 files changed, 2 insertions(+), 0 deletions(-)
diff --git a/MAINTAINERS b/MAINTAINERS
index 14bc707..e265f2e 100644
The old name is part of the userspace API, add it back for compatibility.
Reported-by: Sasha Levin levinsasha...@gmail.com
Signed-off-by: Paolo Bonzini pbonz...@redhat.com
---
include/linux/virtio_blk.h |3 +++
1 files changed, 3 insertions(+), 0 deletions(-)
diff --git a/include/linux
On 05/07/2012 15:53, Michael S. Tsirkin wrote:
On Thu, Jul 05, 2012 at 12:22:33PM +0200, Paolo Bonzini wrote:
Il 05/07/2012 03:52, Nicholas A. Bellinger ha scritto:
fio randrw workload | virtio-scsi-raw | virtio-scsi+tcm_vhost | bare-metal
raw block
On 05/07/2012 16:40, Michael S. Tsirkin wrote:
virtio-scsi is brand new. It's not as if we've had any significant
time to make virtio-scsi-qemu faster. In fact, tcm_vhost existed
before virtio-scsi-qemu did if I understand correctly.
Yes.
Can't the same be said about virtio-scsi - it
On 06/07/2012 05:38, Nicholas A. Bellinger wrote:
So I imagine that setting inquiry/vpd/mode via configfs attribs to match
whatever the guest wants to see (or expects to see) can be enabled
via /sys/kernel/config/target/core/$HBA/$DEV/[wwn,attrib]/ easily to
whatever is required.
On 12/07/2012 07:34, Zhi Yong Wu wrote:
Hi,
Do we need to maintain one QEMU branch to collect all useful latest
patches for tcm_vhost support? You know, those patches will not get
merged into qemu.git/master.
Never say never, but the answer to your question is yes: please apply
this
On 12/07/2012 09:23, James Bottomley wrote:
Cc: Paolo Bonzini pbonz...@redhat.com
Cc: Stefan Hajnoczi stefa...@linux.vnet.ibm.com
Cc: Zhi Yong Wu wu...@cn.ibm.com
Cc: Christoph Hellwig h...@lst.de
Cc: Hannes Reinecke h...@suse.de
Cc: James Bottomley jbottom...@parallels.com
This makes some changes to the virtio-scsi event specification, so that
it is now possible to use virtio-scsi events in the implementation of
the QEMU block_resize command.
Thanks to Cong Meng for finally implementing virtio-scsi hotplug, which
made me look at block_resize again!
Paolo Bonzini
All currently defined event structs have the same fields. Simplify the
driver by enforcing this also for future structs.
Signed-off-by: Paolo Bonzini pbonz...@redhat.com
---
virtio-spec.lyx | 69 +++
1 file changed, 65 insertions(+), 4
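The shared layout being enforced is small; as defined in the spec and include/linux/virtio_scsi.h, every event carries the same three fields:

/* Common layout of all virtio-scsi events. */
struct virtio_scsi_event {
	u32 event;	/* event type, e.g. VIRTIO_SCSI_T_TRANSPORT_RESET */
	u8 lun[8];	/* address of the logical unit concerned */
	u32 reason;	/* type-specific detail code */
};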
, so that
the OS will see the unit attention code and react. Of course a mix of
the three is also possible, depending on how the driver writer prefers
to have his layering violations served.
Signed-off-by: Paolo Bonzini pbonz...@redhat.com
---
virtio-spec.lyx | 95
On 17/07/2012 10:29, Asias He wrote:
So, vhost-blk at least saves ~6 syscalls for us in each request.
Are they really 6? If I/O is coalesced by a factor of 3, for example
(i.e. each exit processes 3 requests), it's really 2 syscalls per request.
Also, is there anything we can improve?
On 17/07/2012 11:21, Asias He wrote:
It depends. Like vhost-scsi, vhost-blk has the problem of a crippled
feature set: no support for block device formats, non-raw protocols,
etc. This makes it different from vhost-net.
Data-plane qemu also has this crippled feature set problem, no?
On 17/07/2012 11:45, Michael S. Tsirkin wrote:
So it begs the question, is it going to be used in production, or just a
useful reference tool?
Sticking to raw already makes virtio-blk faster, doesn't it?
In that case, vhost-blk looks to me like just another optimization option.
Ideally I
On 17/07/2012 12:49, Michael S. Tsirkin wrote:
Ok, that would make more sense. One difference between vhost-blk and
vhost-net is that for vhost-blk there are also management actions that
would trigger the switch, for example a live snapshot.
So a prerequisite for vhost-blk would be that
On 17/07/2012 14:48, Michael S. Tsirkin wrote:
On Tue, Jul 17, 2012 at 01:03:39PM +0100, Stefan Hajnoczi wrote:
On Tue, Jul 17, 2012 at 12:54 PM, Michael S. Tsirkin m...@redhat.com wrote:
Knowing the answer to that is important before anyone can say whether
this approach is good or not.
On 18/07/2012 15:42, Anthony Liguori wrote:
If you add support for a new command, you need to provide userspace a
way to disable this command. If you change what gets reported for VPD,
you need to provide userspace a way to make VPD look like what it did in
a previous version.
The QEMU
On 18/07/2012 21:12, Anthony Liguori wrote:
Is that true for all OSes? Linux may handle things gracefully if UNMAP
starts throwing errors but that doesn't mean that Windows will.
There is so much USB crap (not just removable, think of ATA-to-USB
enclosures) that they have to deal with
On 19/07/2012 09:28, James Bottomley wrote:
INQUIRY responses (at least vendor/product/type) should not change.
INQUIRY responses often change for arrays because a firmware upgrade
enables new features and new features have to declare themselves,
usually in the INQUIRY data. What you
to achieve, simply fall back
to notifying the host notifier manually from qemu if KVM mode is
disabled.
Signed-off-by: Stefan Hajnoczi stefa...@linux.vnet.ibm.com
Cc: Anthony Liguori aligu...@us.ibm.com
Cc: Paolo Bonzini pbonz...@redhat.com
Signed-off-by: Nicholas Bellinger n...@linux-iscsi.org
On 25/07/2012 00:33, Nicholas A. Bellinger wrote:
+int event_notifier_notify(EventNotifier *e)
+{
+uint64_t value = 1;
+int r;
+
+assert(event_notifier_valid(e));
+r = write(e->fd, &value, sizeof(value));
+if (r < 0) {
+return -errno;
+}
+assert(r
...@linux.vnet.ibm.com
Cc: Anthony Liguori aligu...@us.ibm.com
Cc: Paolo Bonzini pbonz...@redhat.com
Cc: Michael S. Tsirkin m...@redhat.com
Signed-off-by: Nicholas Bellinger n...@linux-iscsi.org
---
configure| 10 +++
hw/Makefile.objs |1 +
hw/qdev-properties.c | 32
, which means the
virtqueues have been set up by the guest.
(v2: Squash virtio-scsi: use the vhost-scsi host device from stefan)
Signed-off-by: Stefan Hajnoczi stefa...@linux.vnet.ibm.com
Signed-off-by: Zhi Yong Wu wu...@linux.vnet.ibm.com
Cc: Michael S. Tsirkin m...@redhat.com
Cc: Paolo
On 25/07/2012 09:01, Paolo Bonzini wrote:
From: Stefan Hajnoczi stefa...@linux.vnet.ibm.com
This patch starts and stops vhost as the virtio device transitions
through its status phases. Vhost can only be started once the guest
reports its driver has successfully initialized, which
...@linux.vnet.ibm.com
Cc: Michael S. Tsirkin m...@redhat.com
Cc: Paolo Bonzini pbonz...@redhat.com
Signed-off-by: Nicholas Bellinger n...@linux-iscsi.org
---
hw/virtio-scsi.c | 12
1 files changed, 12 insertions(+), 0 deletions(-)
diff --git a/hw/virtio-scsi.c b/hw/virtio
On 05/07/2012 13:40, Sasha Levin wrote:
@@ -275,7 +274,7 @@ static void vm_del_vq(struct virtqueue *vq)
vring_del_virtqueue(vq);
/* Select and deactivate the queue */
- writel(info->queue_index, vm_dev->base + VIRTIO_MMIO_QUEUE_SEL);
+
, which needs to be rebased.
LUNs above 255 now work for all of scanning, hotplug, hotunplug and
resize.
Thanks,
Paolo
Paolo Bonzini (2):
virtio-scsi: fix LUNs greater than 255
virtio-scsi: support online resizing of disks
drivers/scsi/virtio_scsi.c | 37
with the flat format.
Cc: sta...@vger.kernel.org
Signed-off-by: Paolo Bonzini pbonz...@redhat.com
---
drivers/scsi/virtio_scsi.c |6 +-
1 files changed, 5 insertions(+), 1 deletions(-)
diff --git a/drivers/scsi/virtio_scsi.c b/drivers/scsi/virtio_scsi.c
index c7030fb..8b6b927 100644
--- a/drivers
capacity change from 22548578304 to 23622320128
Signed-off-by: Paolo Bonzini pbonz...@redhat.com
---
drivers/scsi/virtio_scsi.c | 31 ++-
include/linux/virtio_scsi.h |2 ++
2 files changed, 32 insertions(+), 1 deletions(-)
diff --git a/drivers/scsi/virtio_scsi.c b
On 05/07/2012 12:29, Jason Wang wrote:
Sometimes, virtio devices need to configure an irq affinity hint to
maximize performance. Instead of just exposing the irq of a virtqueue,
this patch introduces an API to set the affinity for a virtqueue.
The api is best-effort; the affinity hint
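The API in question is a single best-effort call per virtqueue. A sketch of how a multiqueue driver might use it (helper name hypothetical, signature as introduced by this series):

/* Sketch: pin each request virtqueue to one CPU as a pure hint,
 * wrapping around if there are more queues than online CPUs. */
static void set_vq_affinity(struct virtqueue **req_vqs, int num_vqs)
{
	int i;

	for (i = 0; i < num_vqs; i++)
		virtqueue_set_affinity(req_vqs[i], i % num_online_cpus());
}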
I'm not sure what the correct behavior for bio cacheflush is, if
any.
REQ_FLUSH is not supported in the bio path.
Ouch, that's correct:
@@ -414,7 +529,7 @@ static void virtblk_update_cache_mode(struct virtio_device
*vdev)
u8 writeback = virtblk_get_cache_mode(vdev);
On 29/07/2012 22:40, Michael S. Tsirkin wrote:
Did you set the affinity manually in your experiments, or perhaps there
is a difference between scsi and networking... (interrupt mitigation?)
You need to run irqbalancer in guest to make it actually work. Do you?
Yes, of course, now on
On 30/07/2012 06:43, Asias He wrote:
Yes. Something like this:
qemu -drive file=foo.img,cache=writeback/unsafe
is not safe against power loss either?
cache=writeback and cache=none are safe, cache=unsafe isn't.
I think we can add REQ_FLUSH REQ_FUA support to bio path and that
On 31/07/2012 22:52, Eric Northup wrote:
It seems to me like this is not the way that virtio devices are supposed
to behave - if a guest splits a virtio_scsi_cmd_req or _resp across a
page boundary, then this code won't work.
Buffers can cover several pages. Of course, data buffers have
On 05/07/2012 12:29, Jason Wang wrote:
Sometimes, virtio devices need to configure an irq affinity hint to
maximize performance. Instead of just exposing the irq of a virtqueue,
this patch introduces an API to set the affinity for a virtqueue.
The api is best-effort; the affinity hint
On 30/07/2012 08:27, Paolo Bonzini wrote:
Did you set the affinity manually in your experiments, or perhaps
there
is a difference between scsi and networking... (interrupt mitigation?)
You need to run irqbalancer in guest to make it actually work. Do you?
Yes, of course, now
: Paolo Bonzini pbonz...@redhat.com
Cc: Stefan Hajnoczi stefa...@linux.vnet.ibm.com
Cc: Michael S. Tsirkin m...@redhat.com
Signed-off-by: Nicholas Bellinger n...@linux-iscsi.org
This is a bugfix we need even without vhost, right?
I believe so, as it appears to be stomping past the end
for VHOST_SCSI_GET_ABI_VERSION ioctl (aliguori + nab)
Cc: Stefan Hajnoczi stefa...@linux.vnet.ibm.com
Cc: Zhi Yong Wu wu...@linux.vnet.ibm.com
Cc: Anthony Liguori aligu...@us.ibm.com
Cc: Paolo Bonzini pbonz...@redhat.com
Cc: Michael S. Tsirkin m...@redhat.com
Signed-off-by: Nicholas Bellinger n...@linux
...@linux.vnet.ibm.com
Cc: Zhi Yong Wu wu...@linux.vnet.ibm.com
Cc: Michael S. Tsirkin m...@redhat.com
Cc: Paolo Bonzini pbonz...@redhat.com
Signed-off-by: Nicholas Bellinger n...@linux-iscsi.org
---
hw/virtio-pci.c |1 +
hw/virtio-scsi.c | 48
hw/virtio
On 20/08/2012 13:57, Michael S. Tsirkin wrote:
How much of the functionality of virtio-scsi.[ch] is still in use at
this point? Would it make more sense to use a separate vhost-scsi-pci
device instead?
Since the SCSI target lives in the kernel, almost everything is driven
On 20/08/2012 11:03, Cong Meng wrote:
Each virtio scsi HBA has global request queue limits. But passthrough
LUNs (scsi-generic) coming from different host HBAs may have different
request queue limits. If the guest sends commands that exceed the host
limits, the commands will be rejected
On 20/08/2012 16:44, McPacino wrote:
On 2012-8-20 9:18 PM, Paolo Bonzini pbonz...@redhat.com wrote:
On 20/08/2012 11:03, Cong Meng wrote:
Each virtio scsi HBA has global request queue limits. But passthrough
LUNs (scsi-generic) coming from different host
On 21/08/2012 10:23, Cong Meng wrote:
+static void sg_get_queue_limits(BlockDriverState *bs, const char *filename)
+{
+DIR *ffs;
+struct dirent *d;
+char path[MAXPATHLEN];
+
+snprintf(path, MAXPATHLEN,
+ "/sys/class/scsi_generic/sg%s/device/block/",
+
On 21/08/2012 11:52, Stefan Hajnoczi wrote:
Using /sys/dev/block or /sys/dev/char seems easier, and lets you
retrieve the parameters for block devices too.
what do you mean by block devices? Using /dev/sda instead of
/dev/sg0?
Yes.
However, I'm worried about the consequences
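A sketch of that /sys/dev suggestion for the block case (helper name hypothetical; for sg nodes one still has to hop through the owning SCSI device, as in the patch above):

#include <stdio.h>
#include <sys/stat.h>
#include <sys/sysmacros.h>

/* Hypothetical: resolve a block device node to one of its
 * request_queue limits via /sys/dev/block/MAJOR:MINOR/queue/. */
static int read_max_sectors_kb(const char *devnode, unsigned long *val)
{
	struct stat st;
	char path[128];
	FILE *f;
	int ok;

	if (stat(devnode, &st) < 0 || !S_ISBLK(st.st_mode))
		return -1;
	snprintf(path, sizeof(path),
		 "/sys/dev/block/%u:%u/queue/max_sectors_kb",
		 major(st.st_rdev), minor(st.st_rdev));
	f = fopen(path, "r");
	if (!f)
		return -1;
	ok = fscanf(f, "%lu", val) == 1;
	fclose(f);
	return ok ? 0 : -1;
}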
On 22/08/2012 13:04, Cong Meng wrote:
Cong, what is the limit that the host HBA enforces (and what is the
HBA)? What commands see a problem? Is it fixed by using scsi-block
instead of scsi-generic (if you can use scsi-block at all, i.e. it's not
a tape or similar device)?
I don't see
On 22/08/2012 15:13, Stefan Hajnoczi wrote:
http://lists.gnu.org/archive/html/qemu-devel/2010-12/msg01741.html
This is a real problem in practice. IE. the USB CD-ROM on this POWER7
blade limits transfers to 0x1e000 bytes for example and the Linux sr
driver on the guest is going to try
On 23/08/2012 11:31, Cong Meng wrote:
For disks, this should be fixed simply by using scsi-block instead of
scsi-generic.
CD-ROMs are indeed more complicated because burning CDs cannot be done
with syscalls. :/
So, as the problem exists for CD-ROMs, I will continue to get these patches
On 23/08/2012 12:08, Stefan Hajnoczi wrote:
I'm still trying to understand the extent of the problem.
The problem occurs for _USB_ CD-ROMs according to Ben. Passthrough of
USB storage devices should be done via USB passthrough, not virtio-scsi.
If we do USB passthrough via the SCSI
On 24/08/2012 02:45, Nicholas A. Bellinger wrote:
So up until very recently, TCM would accept an I/O request for a DATA
I/O type CDB with a max_sectors larger than the reported max_sectors for
its TCM backend (regardless of backend type), and silently generate N
backend 'tasks' to
On 24/08/2012 12:43, Hannes Reinecke wrote:
Hehe. So finally someone else stumbled across this one.
All is fine and dandy as long as you're able to use scsi-disk.
As soon as you're forced to use scsi-generic we're in trouble.
With scsi-generic we actually have two problems:
1)
?id=37.
Alternatively you can just set the affinity manually in /proc.
Rusty, can you please give your Acked-by to the first two patches?
Jason Wang (2):
virtio-ring: move queue_index to vring_virtqueue
virtio: introduce an API to set affinity for a virtqueue
Paolo Bonzini (3):
virtio-scsi
jasow...@redhat.com
Signed-off-by: Paolo Bonzini pbonz...@redhat.com
---
I fixed the problems in Jason's v5 (posted at
http://permalink.gmane.org/gmane.linux.kernel.virtualization/15910)
and switched from virtio_set_queue_index to a new argument of
vring_new_virtqueue
the affinity OR over all affinities
requested
Signed-off-by: Jason Wang jasow...@redhat.com
Signed-off-by: Paolo Bonzini pbonz...@redhat.com
---
drivers/virtio/virtio_pci.c | 46 +
include/linux/virtio_config.h | 21 ++
2 files changed, 67
paths.
Signed-off-by: Paolo Bonzini pbonz...@redhat.com
---
drivers/scsi/virtio_scsi.c | 23 +++
1 files changed, 15 insertions(+), 8 deletions(-)
diff --git a/drivers/scsi/virtio_scsi.c b/drivers/scsi/virtio_scsi.c
index 595af1a..62fec04 100644
--- a/drivers/scsi/virtio_scsi.c
This will be needed soon in order to retrieve the per-target
struct.
Signed-off-by: Paolo Bonzini pbonz...@redhat.com
---
drivers/scsi/virtio_scsi.c | 17 +
1 files changed, 9 insertions(+), 8 deletions(-)
diff --git a/drivers/scsi/virtio_scsi.c b/drivers/scsi/virtio_scsi.c
, so the driver expects the number of request queues to be
equal to the number of VCPUs. This makes it easy and fast to select
the queue, and also lets the driver optimize the IRQ affinity for the
virtqueues (each virtqueue's affinity is set to the CPU that owns
the queue).
Signed-off-by: Paolo
On 28/08/2012 16:07, Sasha Levin wrote:
- num_targets = sh->max_id;
- for (i = 0; i < num_targets; i++) {
- kfree(vscsi->tgt[i]);
- vscsi->tgt[i] = NULL;
+ if (vscsi->tgt) {
+ num_targets = sh->max_id;
+ for (i = 0; i < num_targets; i++) {
+
On 30/08/2012 16:53, Michael S. Tsirkin wrote:
this series adds multiqueue support to the virtio-scsi driver, based
on Jason Wang's work on virtio-net. It uses a simple queue steering
algorithm that expects one queue per CPU. LUNs in the same target always
use the same queue (so
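A simplified sketch of that steering rule (struct names approximate): the first request to an idle target binds the target to the submitting CPU's queue, and later requests follow it, which is what preserves per-target ordering:

/* Simplified sketch; assumes one request virtqueue per CPU and a
 * per-target in-flight counter tgt->reqs. */
static struct virtio_scsi_vq *pick_vq(struct virtio_scsi *vscsi,
				      struct virtio_scsi_target_state *tgt)
{
	struct virtio_scsi_vq *vq;
	unsigned long flags;

	spin_lock_irqsave(&tgt->tgt_lock, flags);
	if (atomic_inc_return(&tgt->reqs) == 1)	/* target was idle */
		tgt->req_vq = &vscsi->req_vqs[smp_processor_id()];
	vq = tgt->req_vq;
	spin_unlock_irqrestore(&tgt->tgt_lock, flags);
	return vq;
}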
On 02/07/2012 02:29, Rusty Russell wrote:
VIRTIO_BALLOON_F_MUST_TELL_HOST
implies you should tell the host (eventually). I don't know if any
implementations actually care though.
This is indeed broken, because it is a negative feature: it tells you
that implicit deflate is _not_
On 04/09/2012 04:21, Nicholas A. Bellinger wrote:
@@ -112,6 +118,9 @@ static void virtscsi_complete_cmd(struct virtio_scsi
*vscsi, void *buf)
struct virtio_scsi_cmd *cmd = buf;
struct scsi_cmnd *sc = cmd->sc;
struct virtio_scsi_cmd_resp *resp = &cmd->resp.cmd;
+struct
On 04/09/2012 10:46, Michael S. Tsirkin wrote:
+static int virtscsi_queuecommand_multi(struct Scsi_Host *sh,
+ struct scsi_cmnd *sc)
+{
+ struct virtio_scsi *vscsi = shost_priv(sh);
+ struct virtio_scsi_target_state *tgt = vscsi->tgt[sc->device->id];
+
On 04/09/2012 13:09, Michael S. Tsirkin wrote:
queuecommand on CPU #0        queuecommand #2 on CPU #1
----------------------        -------------------------
atomic_inc_return(...) == 1
                              atomic_inc_return(...) == 2
On 04/09/2012 15:35, Michael S. Tsirkin wrote:
I see. I guess you can rewrite this as:
atomic_inc(...);
if (atomic_read(...) == 1)
which is a bit cheaper, and makes it explicit that the increment
and return do not need to be atomic.
It seems more complicated to me for hardly any reason.
affinity is set to the CPU that owns
the queue).
Signed-off-by: Paolo Bonzini pbonz...@redhat.com
I guess an alternative is a per-target vq.
Is the reason you avoid this that you expect more targets
than cpus? If yes this is something you might want to
mention in the log.
One reason
On 04/09/2012 16:19, Michael S. Tsirkin wrote:
Also - some kind of comment explaining why a similar race cannot happen
with this lock in place would be nice: I see why this specific race
cannot trigger, but since the lock is dropped later, before you submit
the command, I have a hard time
On 04/09/2012 16:21, Michael S. Tsirkin wrote:
One reason is that, even though in practice I expect roughly the same
number of targets and VCPUs, hotplug means the number of targets is
difficult to predict and is usually fixed to 256.
The other reason is that per-target vq didn't