NBD can become contended on its single connection. We have to serialize all
writes and we can only process one read response at a time. Fix this by
allowing userspace to provide multiple connections to a single nbd device. This
coupled with block-mq drastically increases performance in
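The idea in this cover-letter excerpt, giving each blk-mq hardware queue its own connection so requests no longer serialize on a single socket, can be sketched as follows. This is a minimal Python model for illustration only; the class and method names are hypothetical and are not taken from the kernel patch:

```python
# Toy model: spreading requests across several connections instead of
# funneling everything through one. Each entry in `queues` stands in for
# the in-flight list of one socket/hardware queue.
from itertools import cycle

class MultiConnNbd:
    def __init__(self, num_connections):
        self.queues = [[] for _ in range(num_connections)]
        self._rr = cycle(range(num_connections))

    def submit(self, request):
        # Round-robin placement: requests on different queues never
        # serialize behind each other on a single connection.
        idx = next(self._rr)
        self.queues[idx].append(request)
        return idx

nbd = MultiConnNbd(4)
placements = [nbd.submit(f"write-{i}") for i in range(8)]
```

With four connections, eight requests land two per queue rather than eight deep on one socket, which is the contention the cover letter describes.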
On Thu, Oct 06, 2016 at 06:16:30AM -0700, Christoph Hellwig wrote:
> On Thu, Oct 06, 2016 at 03:09:49PM +0200, Wouter Verhelst wrote:
> > Okay, I've updated the proto.md file then, to clarify that in the case
> > of multiple connections, a client MUST NOT send a flush request until it
> > has seen
On Thu, Oct 06, 2016 at 03:09:49PM +0200, Wouter Verhelst wrote:
> Okay, I've updated the proto.md file then, to clarify that in the case
> of multiple connections, a client MUST NOT send a flush request until it
> has seen the replies to the write requests that it cares about. That
> should be
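The rule Wouter describes adding to proto.md, that a client must not send a flush until it has seen the replies to the writes it wants the flush to cover, can be sketched as follows (illustrative Python only; the class and handle-tracking scheme are hypothetical, not the spec's wording):

```python
# Sketch of the proto.md rule quoted above: FLUSH may only be issued once
# every write it must cover has been acknowledged, because with multiple
# connections nothing else orders the flush after those writes.
class NbdClient:
    def __init__(self):
        self.inflight_writes = set()
        self.completed_writes = set()

    def send_write(self, handle):
        self.inflight_writes.add(handle)

    def recv_write_reply(self, handle):
        self.inflight_writes.discard(handle)
        self.completed_writes.add(handle)

    def can_flush(self, handles):
        # True only when every write the flush must cover has replied.
        return all(h in self.completed_writes for h in handles)

c = NbdClient()
c.send_write(1)
c.send_write(2)
c.recv_write_reply(1)
ok_before = c.can_flush({1, 2})   # write 2 still in flight
c.recv_write_reply(2)
ok_after = c.can_flush({1, 2})
```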
On Thu, Oct 06, 2016 at 10:41:36AM +0100, Alex Bligh wrote:
> Wouter,
[...]
> > Given that, given the issue in the previous
> > paragraph, and given the uncertainty introduced with multiple
> > connections, I think it is reasonable to say that a client should just
> > not assume a flush touches
Wouter,
>>> It is impossible for nbd to make such a guarantee, due to head-of-line
>>> blocking on TCP.
>>
>> this is perfectly accurate as far as it goes, but this isn't the current
>> NBD definition of 'flush'.
>
> I didn't read it that way.
>
>> That is (from the docs):
>>
>>> All write

On Mon, Oct 03, 2016 at 12:34:33PM +0100, Alex Bligh wrote:
> On 3 Oct 2016, at 08:57, Christoph Hellwig wrote:
> >> Can you clarify what you mean by that? Why is it an "odd flush
> >> definition", and how would you "properly" define it?
> >
> > E.g. take
On 10/03/2016 07:34 AM, Alex Bligh wrote:
On 3 Oct 2016, at 08:57, Christoph Hellwig wrote:
Can you clarify what you mean by that? Why is it an "odd flush
definition", and how would you "properly" define it?
E.g. take the definition from NVMe which also supports
> On 3 Oct 2016, at 08:57, Christoph Hellwig wrote:
>
>> Can you clarify what you mean by that? Why is it an "odd flush
>> definition", and how would you "properly" define it?
>
> E.g. take the definition from NVMe which also supports multiple queues:
>
> "The Flush command
On Mon, Oct 03, 2016 at 09:51:49AM +0200, Wouter Verhelst wrote:
> Actually, I was pointing out the TCP head-of-line issue, where a delay
> on the socket that contains the flush reply would result in the arrival
> in the kernel block layer of a write reply before the said flush reply,
> resulting
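The head-of-line scenario Wouter points out here can be shown with a toy model: TCP gives FIFO ordering only within each socket, so a write reply on one connection can reach the block layer before a flush reply that is stuck behind a delay on another connection. This is a minimal Python illustration under that assumption, not kernel code:

```python
# Two per-connection reply queues; each preserves its own order, but
# nothing orders replies *across* sockets. A delay on the socket carrying
# the flush reply lets a later write reply arrive first.
from collections import deque

sock_a = deque(["flush-reply"])   # this socket's data is delayed
sock_b = deque(["write-reply"])   # this socket's data arrives promptly

arrival_order = []
for sock in (sock_b, sock_a):     # socket B happens to be drained first
    while sock:
        arrival_order.append(sock.popleft())
```

The block layer thus observes the write completion before the flush completion, which is exactly the reordering described above.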
On Mon, Oct 03, 2016 at 12:20:49AM -0700, Christoph Hellwig wrote:
> On Mon, Oct 03, 2016 at 01:47:06AM +, Josef Bacik wrote:
> > It's not "broken", it's working as designed, and any fs on top of this
> > patch will be perfectly safe because they all wait for their io to complete
> > before
On Sun, Oct 02, 2016 at 05:17:14PM +0100, Alex Bligh wrote:
> On 29 Sep 2016, at 17:59, Josef Bacik wrote:
> > Huh I missed that. Yeah that's not possible for us for sure, I think my
> > option idea is the less awful way forward if we want to address that
> > limitation.
> >
> On 29 Sep 2016, at 17:59, Josef Bacik wrote:
>
> On 09/29/2016 12:41 PM, Wouter Verhelst wrote:
>> On Thu, Sep 29, 2016 at 10:03:50AM -0400, Josef Bacik wrote:
>>> So think of it like normal disks with multiple channels. We don't send
>>> flushes
>>> down all the hwq's to
On 09/29/2016 12:41 PM, Wouter Verhelst wrote:
> On Thu, Sep 29, 2016 at 10:03:50AM -0400, Josef Bacik wrote:
>> So think of it like normal disks with multiple channels. We don't send
>> flushes down all the hwq's to make sure they are clear, we leave that
>> decision up to the application (usually a FS
On Thu, Sep 29, 2016 at 10:03:50AM -0400, Josef Bacik wrote:
> So think of it like normal disks with multiple channels. We don't send
> flushes down all the hwq's to make sure they are clear, we leave that
> decision up to the application (usually a FS of course).
Well, when I asked
On 09/29/2016 05:52 AM, Wouter Verhelst wrote:
> Hi Josef,
> On Wed, Sep 28, 2016 at 04:01:32PM -0400, Josef Bacik wrote:
>> NBD can become contended on its single connection. We have to serialize all
>> writes and we can only process one read response at a time. Fix this by
>> allowing userspace to
Hi Josef,
On Wed, Sep 28, 2016 at 04:01:32PM -0400, Josef Bacik wrote:
> NBD can become contended on its single connection. We have to serialize all
> writes and we can only process one read response at a time. Fix this by
> allowing userspace to provide multiple connections to a single nbd