echo -n ...,scsi_target_id=2,scsi_lun_id=0 > $data/control
echo -n 1 > $data/enable
ln -s $data $lun
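For anyone trying to reproduce this: the lines above are the tail of a target
configfs setup, with the '>' redirects restored (they were eaten by the archive).
A minimal sketch of the whole sequence follows; the backstore name, WWPN, and
pscsi IDs are illustrative assumptions, not values from the original mail:

    # All names/paths below are assumptions for illustration.
    modprobe vhost_scsi
    core=/sys/kernel/config/target/core
    fabric=/sys/kernel/config/target/vhost

    # Backstore: pass through a host SCSI device via the pscsi backend.
    data=$core/pscsi_0/mydev
    mkdir -p $data
    echo -n scsi_host_id=2,scsi_channel_id=0,scsi_target_id=2,scsi_lun_id=0 > $data/control
    echo -n 1 > $data/enable

    # Fabric side: vhost target port group, I_T nexus, and the LUN link.
    mkdir -p $fabric/naa.600140554cf3a18e/tpgt_1/lun/lun_0
    echo -n naa.600140554cf3a18e > $fabric/naa.600140554cf3a18e/tpgt_1/nexus
    lun=$fabric/naa.600140554cf3a18e/tpgt_1/lun/lun_0/virtual_port
    ln -s $data $lun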
Thanks,
Badari
On 05/29/2013 10:36 PM, Nicholas A. Bellinger wrote:
On Wed, 2013-05-29 at 21:29 -0700, Nicholas A. Bellinger wrote:
On Thu, 2013-05-30 at 06:17 +0800, Asias He wrote:
On Wed, May 29, 2013 at 08:10:44AM -0700, Badari Pulavarty wrote:
On 05/29/2013 02:05 AM, Wenchao Xia wrote:
On 2013-5-28 17
104858112 bytes, see the invocation script below)
I see the same. For some reason fdisk -l in the VM shows
512 bytes more than the actual size of the file (on the host).
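Assuming the backing file is 100 MiB (104857600 bytes), the 104858112 figure
quoted above is off by exactly one 512-byte sector, which is easy to check:

    echo $((104858112 - 104857600))    # prints 512, i.e. one extra sector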
Thanks,
Badari
On 05/23/2013 06:32 AM, Stefan Hajnoczi wrote:
On Thu, May 23, 2013 at 11:48 AM, Gleb Natapov <g...@redhat.com> wrote:
On Thu, May 23, 2013 at 08:53:55AM +0800, Asias He wrote:
On Wed, May 22, 2013 at 05:36:08PM -0700, Badari wrote:
Hi,
While testing vhost-scsi in the current qemu git, ran
On 05/23/2013 07:58 AM, Paolo Bonzini wrote:
On 23/05/2013 16:48, Badari Pulavarty wrote:
The common virtio-scsi code in QEMU should guard against this. In
virtio-blk data plane I hit a similar case and ended up starting the
data plane thread (equivalent to vhost here) *before* the status
On 05/23/2013 08:30 AM, Paolo Bonzini wrote:
On 23/05/2013 17:27, Asias He wrote:
On Thu, May 23, 2013 at 04:58:05PM +0200, Paolo Bonzini wrote:
On 23/05/2013 16:48, Badari Pulavarty wrote:
The common virtio-scsi code in QEMU should guard against this. In
virtio-blk data plane I
On 05/23/2013 09:19 AM, Paolo Bonzini wrote:
On 23/05/2013 18:11, Badari Pulavarty wrote:
On 05/23/2013 08:30 AM, Paolo Bonzini wrote:
On 23/05/2013 17:27, Asias He wrote:
On Thu, May 23, 2013 at 04:58:05PM +0200, Paolo Bonzini wrote:
On 23/05/2013 16:48, Badari Pulavarty wrote:
Hi,
While testing vhost-scsi in the current qemu git, ran into an earlier issue
with seabios. I had to disable scsi support in seabios to get it working.
I was hoping this issue got resolved when vhost-scsi support got
merged into qemu. Is this still being worked on ?
Thanks,
Badari
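Regarding the workaround above: disabling SCSI support when rebuilding seabios
is a Kconfig toggle. A sketch, assuming a stock seabios tree (CONFIG_VIRTIO_SCSI
is the option name assumed from seabios's Kconfig; verify against your version):

    # In the seabios source tree:
    make menuconfig      # deselect virtio-scsi support (CONFIG_VIRTIO_SCSI)
    make                 # rebuilds out/bios.bin
    qemu-system-x86_64 -bios out/bios.bin ...   # boot qemu with the rebuilt BIOS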
[root
are limited and Stefan is the more appropriate person to deal with QEMU :)
As mentioned earlier, none of the temporary solutions suggested
so far (RPC table slot + ASYNC) comes anywhere close to the
performance of the readv/writev support patch or linearized QEMU.
Thanks,
Badari
On Fri, 2011-04-15 at 13:09 -0500, Anthony Liguori wrote:
On 04/15/2011 11:23 AM, Badari Pulavarty wrote:
On Fri, 2011-04-15 at 17:34 +0200, Christoph Hellwig wrote:
On Fri, Apr 15, 2011 at 04:26:41PM +0100, Stefan Hajnoczi wrote:
On Fri, Apr 15, 2011 at 4:05 PM, Christoph Hellwig
On 4/15/2011 10:29 AM, Christoph Hellwig wrote:
On Fri, Apr 15, 2011 at 09:23:54AM -0700, Badari Pulavarty wrote:
True. That brings up a different question - whether we are doing
enough testing on mainline QEMU :(
It seems you're clearly not doing enough testing on any qemu. Even
On 4/15/2011 4:00 PM, Anthony Liguori wrote:
On 04/15/2011 05:21 PM, pbad...@linux.vnet.ibm.com wrote:
On 4/15/2011 10:29 AM, Christoph Hellwig wrote:
On Fri, Apr 15, 2011 at 09:23:54AM -0700, Badari Pulavarty wrote:
True. That brings up a different question - whether we are doing
enough
will be posting in the next few days.
Comments ?
Todo:
- Address hch's comments on annotations
- Implement per device read/write queues
- Finish up error handling
Thanks,
Badari
---
drivers/vhost/blk.c | 445 +++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 445 insertions(+)
Michael S. Tsirkin wrote:
On Tue, Mar 23, 2010 at 12:55:07PM -0700, Badari Pulavarty wrote:
Michael S. Tsirkin wrote:
On Tue, Mar 23, 2010 at 10:57:33AM -0700, Badari Pulavarty wrote:
Michael S. Tsirkin wrote:
On Mon, Mar 22, 2010 at 05:34:04PM -0700, Badari
sequential IO write
tests with vhost-blk compared to virtio-blk.
# time dd of=/dev/vda if=/dev/zero bs=2M oflag=direct
I get ~110MB/sec with virtio-blk, but I get only ~60MB/sec with
vhost-blk. Wondering why ?
Comments/flames ?
Thanks,
Badari
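One note on the dd invocation above: without a count it runs until the device
fills, so the reported throughput covers the whole device. A bounded variant
(the 2 GiB total is an arbitrary choice) makes runs easier to compare:

    time dd if=/dev/zero of=/dev/vda bs=2M count=1024 oflag=direct   # 2 GiB, O_DIRECT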
vhost-blk is an in-kernel accelerator for virtio-blk.
Michael S. Tsirkin wrote:
On Mon, Mar 22, 2010 at 05:34:04PM -0700, Badari Pulavarty wrote:
Write Results:
==============
I see degraded IO performance when doing sequential IO write
tests with vhost-blk compared to virtio-blk.
# time dd of=/dev/vda if=/dev/zero bs=2M oflag=direct
I get