On Wed, 01/10 08:44, Eric Blake wrote:
> On 01/10/2018 03:18 AM, Fam Zheng wrote:
> > This is a new protocol driver that exclusively opens a host NVMe
> > controller through VFIO. It achieves better latency than linux-aio by
> > completely bypassing host kernel vfs/block layer.
> >
>
> > +static
On Wed, 01/10 18:33, Stefan Hajnoczi wrote:
> > +    ret = event_notifier_init(&s->irq_notifier, 0);
> > +    if (ret) {
> > +        error_setg(errp, "Failed to init event notifier");
> > +        return ret;
> > +    }
>
> dma_map_lock should be destroyed.
CoMutexes are initialized by memset, so I don't think an explicit destroy is needed here.
On 10/01/2018 19:33, Stefan Hajnoczi wrote:
>> +
>> +/* Fields protected by @lock */
> Does this lock serve any purpose? I didn't see a place where these
> fields are accessed from multiple threads. Perhaps you're trying to
> prepare for multiqueue, but then other things like the
>
On Wed, Jan 10, 2018 at 05:18:40PM +0800, Fam Zheng wrote:
There are several memory and lock leaks in this patch. Please work with
Paolo to get the __attribute__((cleanup(...))) patch series merged so
this class of bugs can be eliminated:
On 10/01/2018 15:43, Eric Blake wrote:
> On 01/10/2018 03:18 AM, Fam Zheng wrote:
>> This is a new protocol driver that exclusively opens a host NVMe
>> controller through VFIO. It achieves better latency than linux-aio by
>> completely bypassing host kernel vfs/block layer.
>>
>>
On 01/10/2018 03:18 AM, Fam Zheng wrote:
> This is a new protocol driver that exclusively opens a host NVMe
> controller through VFIO. It achieves better latency than linux-aio by
> completely bypassing host kernel vfs/block layer.
>
> +static BlockDriver bdrv_nvme = {
> +    .format_name
On 01/10/2018 03:18 AM, Fam Zheng wrote:
> This is a new protocol driver that exclusively opens a host NVMe
> controller through VFIO. It achieves better latency than linux-aio by
> completely bypassing host kernel vfs/block layer.
>
> $rw-$bs-$iodepth linux-aio nvme://
>