On Wed, Oct 5, 2016 at 12:16 PM, Bart Van Assche wrote:
> On 10/01/16 15:56, Ming Lei wrote:
>>
>> If we just take the rcu/srcu read lock (or the mutex) around .queue_rq(),
>> the above code needn't be duplicated any more.
>
>
> Hello Ming,
>
> Can you have a look
On Tue, Oct 4, 2016 at 6:00 AM, Keith Busch wrote:
> On Tue, Sep 27, 2016 at 05:25:36PM +0800, Ming Lei wrote:
>> On Mon, 26 Sep 2016 19:00:30 -0400, Keith Busch wrote:
>>
>> > The only user of polling requires its original request be completed in
Hi Sergey,
On Tue, Oct 04, 2016 at 01:43:14PM +0900, Sergey Senozhatsky wrote:
< snip >
> TEST
>
>
> new tests results; same tests, same conditions, same .config.
> 4-way test:
> - BASE zram, fio direct=1
> - BASE zram, fio fsync_on_close=1
> - NEW zram, fio direct=1
> - NEW zram, fio
Hello, Adam.
On Tue, Oct 04, 2016 at 08:49:18AM -0700, Adam Manzanares wrote:
> > I wonder whether the right thing to do is to add bio->bi_ioprio, which
> > is initialized on bio submission and carried through to req->ioprio.
>
> I looked around and thought about this and I'm not sure if this will
Hello, Paolo.
On Tue, Oct 04, 2016 at 09:29:48PM +0200, Paolo Valente wrote:
> > Hmm... I think we already discussed this but here's a really simple
> > case. There are three unknown workloads A, B and C and we want to
> > give A certain best-effort guarantees (let's say around 80% of the
> >
> On 04 Oct 2016, at 21:14, Tejun Heo wrote:
>
> Hello, Paolo.
>
> On Tue, Oct 04, 2016 at 09:02:47PM +0200, Paolo Valente wrote:
>> That's exactly what BFQ has succeeded in doing in all the tests
>> devised so far. Can you give me a concrete example for
Hello, Paolo.
On Tue, Oct 04, 2016 at 09:02:47PM +0200, Paolo Valente wrote:
> That's exactly what BFQ has succeeded in doing in all the tests
> devised so far. Can you give me a concrete example that I can
> try with BFQ and with any other mechanism you deem better? If
> you are right,
> On 04 Oct 2016, at 17:56, Tejun Heo wrote:
>
> Hello, Vivek.
>
> On Tue, Oct 04, 2016 at 09:28:05AM -0400, Vivek Goyal wrote:
>> On Mon, Oct 03, 2016 at 02:20:19PM -0700, Shaohua Li wrote:
>>> Hi,
>>>
>>> The background is we don't have an ioscheduler for
Hello, Vivek.
On Tue, Oct 04, 2016 at 09:28:05AM -0400, Vivek Goyal wrote:
> On Mon, Oct 03, 2016 at 02:20:19PM -0700, Shaohua Li wrote:
> > Hi,
> >
> > The background is we don't have an ioscheduler for blk-mq yet, so we can't
> > prioritize processes/cgroups.
>
> So this is an interim
Hello Tejun,
10/02/2016 10:53, Tejun Heo wrote:
> Hello, Adam.
>
> On Fri, Sep 30, 2016 at 09:02:17AM -0700, Adam Manzanares wrote:
> > I'll start with the changes I made and work my way through a grep of
> >
> > ioprio. Please add or correct any of the assumptions I have made.
On Mon, Oct 03, 2016 at 02:20:19PM -0700, Shaohua Li wrote:
> Hi,
>
> The background is we don't have an ioscheduler for blk-mq yet, so we can't
> prioritize processes/cgroups.
So this is an interim solution till we have an ioscheduler for blk-mq?
> This patch set tries to add basic arbitration
>
Wouter,
>>> It is impossible for nbd to make such a guarantee, due to head-of-line
>>> blocking on TCP.
>>
>> This is perfectly accurate as far as it goes, but this isn't the current
>> NBD definition of 'flush'.
>
> I didn't read it that way.
>
>> That is (from the docs):
>>
>>> All write
Hi Sergey,
On Tue, Oct 04, 2016 at 01:43:14PM +0900, Sergey Senozhatsky wrote:
> Hello,
>
> Cc Jens and block-dev,
>
> I'll outline the commit message for Jens and blockdev people; maybe
> someone will have some thoughts/ideas/opinions:
Thanks for Ccing the relevant people. Even I didn't know we