On 30 May 2014, at 17:39, Tejun Heo wrote:
> On Fri, May 30, 2014 at 11:37:18AM -0400, Tejun Heo wrote:
>> On Thu, May 29, 2014 at 11:05:33AM +0200, Paolo Valente wrote:
>>> diff --git a/include/linux/cgroup_subsys.h b/include/linux/cgroup_subsys.h
>>
On 30 May 2014, at 17:37, Tejun Heo wrote:
> On Thu, May 29, 2014 at 11:05:33AM +0200, Paolo Valente wrote:
>> diff --git a/include/linux/cgroup_subsys.h b/include/linux/cgroup_subsys.h
>> index 768fe44..cdd2528 100644
>> --- a/include/linux/cgroup_subs
On 30 May 2014, at 17:46, Tejun Heo wrote:
> On Thu, May 29, 2014 at 11:05:42AM +0200, Paolo Valente wrote:
>> This patch boosts the throughput on NCQ-capable flash-based devices,
>> while still preserving latency guarantees for interactive and soft
>> r
On 30 May 2014, at 18:07, Tejun Heo wrote:
> Hello, Paolo.
>
> On Thu, May 29, 2014 at 11:05:31AM +0200, Paolo Valente wrote:
>> this patchset introduces the latest version of BFQ, a proportional-share
>> storage-I/O scheduler. BFQ also supports hierarc
ve tried to describe both detection
heuristics in patches 06 and 07.
Finally, as for adding to cfq the heuristics I have added to bfq, I think that
this would probably improve application latencies with cfq as well. But, because
of the above facts, the result would unavoidably be worse than with b
On 31 May 2014, at 01:28, Tejun Heo wrote:
> Hello,
>
> On Sat, May 31, 2014 at 12:23:01AM +0200, Paolo Valente wrote:
>> I do agree that bfq has essentially the same purpose as cfq. I am
>> not sure that it is what you are proposing, but, in my opini
On 31 May 2014, at 13:52, Tejun Heo wrote:
> Hello, Paolo.
>
> So, I've actually looked at the code. Here are some questions.
>
> On Thu, May 29, 2014 at 11:05:42AM +0200, Paolo Valente wrote:
>> + * 1) all active queues have the same weight,
>
On 1 Jun 2014, at 02:03, Tejun Heo wrote:
> Hello,
>
> On Thu, May 29, 2014 at 11:05:41AM +0200, Paolo Valente wrote:
>> Unfortunately, in the following frequent case the mechanism
>> implemented in CFQ for detecting cooperating processes and merging
&
On 31 May 2014, at 15:54, Tejun Heo wrote:
> On Thu, May 29, 2014 at 11:05:40AM +0200, Paolo Valente wrote:
>> This patch introduces a heuristic that reduces latency when the
>> I/O-request pool is saturated. This goal is achieved by disabling
>> device i
On 31 May 2014, at 15:48, Tejun Heo wrote:
> On Thu, May 29, 2014 at 11:05:39AM +0200, Paolo Valente wrote:
>> This patch addresses this issue by not disabling device idling for
>
> This patch addresses this issue by allowing device idling for...
>
>&
On 2 Jun 2014, at 15:02, Pavel Machek wrote:
> Hi!
>
>>> Well, it's all about how to actually route the changes and in general
>>> whenever avoidable we try to avoid whole-sale code replacement
>>> especially when most of the structural code is similar like in this
>>> case.
On 3 Jun 2014, at 19:11, Tejun Heo wrote:
> Hello,
>
> On Mon, Jun 02, 2014 at 11:26:07AM +0200, Paolo Valente wrote:
>>>> #define cond_for_expiring_non_wr (bfqd->hw_tag && \
>>>> -
0.99%CPU
> # CFQ, no background load
> 8.51user 0.75system 30.99 (0m30.994s) elapsed 29.91%CPU
> # CFQ
> 8.70user 1.36system 74.72 (1m14.720s) elapsed 13.48%CPU
>
Definitely bad, we are about to repeat the test …
Thanks,
Paolo
>
On 3 Jun 2014, at 18:28, Tejun Heo wrote:
> Hello,
>
> On Mon, Jun 02, 2014 at 11:46:45AM +0200, Paolo Valente wrote:
>>> I don't really follow the last part. So, the difference is that
>>> cooperating queue setup also takes place during
On 4 Jun 2014, at 13:59, Takashi Iwai wrote:
> At Wed, 4 Jun 2014 12:24:30 +0200,
> Paolo Valente wrote:
>>
>>
>> On 4 Jun 2014, at 12:03, Pavel Machek wrote:
>>
>>> Hi!
>>>
>>>>
otational or not.
> Its effect does and I don't think avoiding the overhead of keeping the
> counters is meaningful. Things like this make the code a lot harder
> to maintain in the long term as code is organized according to
> seemingly arbitrary optimization rather than semantic
certainly
cause the throughput to drop.
> And let's please document what we're catching with the
> extra attempts.
>
Definitely, thanks,
Paolo
> Thanks.
>
> --
> tejun
--
Paolo Valente
On 2 Jun 2014, at 16:29, Jens Axboe wrote:
> On 2014-05-30 23:16, Tejun Heo wrote:
>>> for turning patch #2 into a series of changes for CFQ instead. We need to
>>> end up with something where we can potentially bisect our way down to
>>> whatever caused any given regression.
On 19 Jun 2014, at 04:29, Jens Axboe wrote:
> On 2014-06-18 18:46, Tejun Heo wrote:
>> Hello,
>>
>> On Tue, Jun 17, 2014 at 05:55:57PM +0200, Paolo Valente wrote:
>>> In general, with both a smooth but messy and a sharp but clean
>>>
On 13 Apr 2016, at 22:41, Tejun Heo wrote:
> Hello,
>
> Sorry about the long delay.
>
> On Wed, Mar 09, 2016 at 07:34:15AM +0100, Paolo Valente wrote:
>> This is probably the focal point of our discussion. Unfortunately, I
>> am still not convi
On 14 Apr 2016, at 18:29, Tejun Heo wrote:
> Hello, Paolo.
>
> On Thu, Apr 14, 2016 at 12:23:14PM +0200, Paolo Valente wrote:
> ...
>>>> 1) Stable(r) and tight bandwidth distribution for mostly-sequential
>>>> reads/writes
>>&
On 15 Apr 2016, at 17:08, Tejun Heo wrote:
> Hello, Paolo.
>
> On Fri, Apr 15, 2016 at 04:20:44PM +0200, Paolo Valente wrote:
>>> It's actually a lot more difficult to answer that with bandwidth
>>> scheduling. Let's say cgroup A h
On 15 Apr 2016, at 21:29, Tejun Heo wrote:
> Hello, Paolo.
>
> On Fri, Apr 15, 2016 at 06:17:55PM +0200, Paolo Valente wrote:
>>> I don't think that is true with time based scheduling. If you
>>> allocate 50% of time, it'll get close to 5
On 25 Apr 2016, at 22:30, Paolo wrote:
> On 25/04/2016 21:24, Tejun Heo wrote:
>> Hello, Paolo.
>>
>
> Hi
>
>> On Sat, Apr 23, 2016 at 09:07:47AM +0200, Paolo Valente wrote:
>>> There is certainly something I don’t know here, b
association in bio_split.
Signed-off-by: Paolo Valente
---
block/bio.c | 5 +
1 file changed, 5 insertions(+)
diff --git a/block/bio.c b/block/bio.c
index 807d25e..c4a3834 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1811,6 +1811,11 @@ struct bio *bio_split(struct bio *bio, int sectors
association in bio_split.
Signed-off-by: Paolo Valente
---
block/bio.c | 15 +++
include/linux/bio.h | 2 ++
2 files changed, 17 insertions(+)
diff --git a/block/bio.c b/block/bio.c
index 807d25e..20795eb 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1811,6 +1811,8
association in bio-cloning functions.
Signed-off-by: Paolo Valente
---
block/bio.c | 17 +
fs/btrfs/extent_io.c | 6 --
include/linux/bio.h | 2 ++
3 files changed, 19 insertions(+), 6 deletions(-)
diff --git a/block/bio.c b/block/bio.c
index 807d25e..e9b136a 100644
association in bio-cloning functions.
Signed-off-by: Paolo Valente
---
block/bio.c | 17 +
fs/btrfs/extent_io.c | 6 --
include/linux/bio.h | 3 +++
3 files changed, 20 insertions(+), 6 deletions(-)
diff --git a/block/bio.c b/block/bio.c
index 807d25e..e9b136a 100644
association in bio-cloning functions.
Signed-off-by: Paolo Valente
---
block/bio.c | 15 +++
fs/btrfs/extent_io.c | 6 --
include/linux/bio.h | 3 +++
3 files changed, 18 insertions(+), 6 deletions(-)
diff --git a/block/bio.c b/block/bio.c
index 807d25e..5eaf0b5 100644
association in bio-cloning functions.
Signed-off-by: Paolo Valente
Reviewed-by: Nikolay Borisov
---
Tejun: I didn't also add your Ack, because this version is slightly
different from the one you acked.
---
block/bio.c | 15 +++
fs/btrfs/extent_io.c | 6 --
include/
Hi Jens,
this is just to ask you whether you made any decision about these patches,
including just not to apply them.
Thanks,
Paolo
On 3 Nov 2015, at 10:01, Paolo Valente wrote:
>
> On 2 Nov 2015, at 17:14, Jens Axboe wrote:
>
>> On 1
type of completion_nsec to unsigned long
Paolo Valente (1):
null_blk: set a separate timer for each command
drivers/block/null_blk.c | 94 +---
1 file changed, 33 insertions(+), 61 deletions(-)
--
1.9.1
--
To unsubscribe from this list: send the line
From: Arianna Avanzini
This commit at least doubles the maximum value for
completion_nsec. This helps in special cases where one wants/needs to
emulate an extremely slow I/O (for example to spot bugs).
Signed-off-by: Paolo Valente
Signed-off-by: Arianna Avanzini
---
drivers/block/null_blk.c
device is properly restarted for all irqmodes on completions.
Signed-off-by: Paolo Valente
Signed-off-by: Arianna Avanzini
---
Changes V1->V2
- reinstated mq_ops check
drivers/block/null_blk.c | 27 +++
1 file changed, 15 insertions(+), 12 deletions(-)
diff --gi
desired one.
This commit addresses this issue by replacing per-CPU timers with
per-command timers, i.e., by associating an individual timer with each
command.
Signed-off-by: Paolo Valente
Signed-off-by: Arianna Avanzini
---
drivers/block/null_blk.c | 79
Hi,
your first statement "bfq is using bandwidth as the virtual time" is not very
clear to me. In contrast, after that you raise a clear, important issue. So, I
will first try to sync with you on your first statement (and hopefully help
find related bugs in the documentation of BFQ). Then I will
On 11 Feb 2016, at 23:28, Tejun Heo wrote:
> Hello,
>
> On Mon, Feb 01, 2016 at 11:12:46PM +0100, Paolo Valente wrote:
>> From: Arianna Avanzini
>>
>> Complete support for full hierarchical scheduling, with a cgroups
>> interface. Th
On 22 Apr 2016, at 20:13, Tejun Heo wrote:
> Hello, Paolo.
>
> On Wed, Apr 20, 2016 at 11:32:23AM +0200, Paolo wrote:
>> This malfunction seems related to a blkcg behavior that I did not
>> expect: the sequential writer changes group continuously. It moves
>> from the root gro
On 22 Apr 2016, at 20:41, Tejun Heo wrote:
> Hello, Paolo.
>
> On Fri, Apr 22, 2016 at 08:19:47PM +0200, Paolo Valente wrote:
>>> So, a kworker would jump through different workqueues and issue IOs
>>> for different writeback domains and the c
On 1 Mar 2016, at 19:46, Tejun Heo wrote:
> Hello, Paolo.
>
> Sorry about the delay.
>
> On Sat, Feb 20, 2016 at 11:23:43AM +0100, Paolo Valente wrote:
>> Before replying to your points, I want to stress that I'm not a
>> champion of budg
On 4 Mar 2016, at 18:39, Christoph Hellwig wrote:
> On Sat, Mar 05, 2016 at 12:29:39AM +0700, Linus Walleij wrote:
>> Hi Tejun,
>>
>> I'm doing a summary of this discussion as a part of presenting
>> Linaro's involvement in Paolo's work. So I try to understand things.
>
> B
Hi
On 16 Apr 2016, at 00:45, Tejun Heo wrote:
> Hello, Paolo.
>
> On Sat, Apr 16, 2016 at 12:08:44AM +0200, Paolo Valente wrote:
>> Maybe the source of confusion is the fact that a simple sector-based,
>> proportional share scheduler always distri
On 22 Apr 2016, at 21:32, Tejun Heo wrote:
> Hello, Paolo.
>
> On Fri, Apr 22, 2016 at 09:05:14PM +0200, Paolo Valente wrote:
>>> Ah, right, I was confused. cic is always associated with the task and
>>> yes a writeback worker can trigger blkcg c
null_blk: change type of completion_nsec to unsigned long
Paolo Valente (1):
null_blk: set a separate timer for each command
drivers/block/null_blk.c | 94 +---
1 file changed, 33 insertions(+), 61 deletions(-)
--
1.9.1
device is properly restarted for all irqmodes on completions.
Signed-off-by: Paolo Valente
Signed-off-by: Arianna Avanzini
---
drivers/block/null_blk.c | 29 -
1 file changed, 16 insertions(+), 13 deletions(-)
diff --git a/drivers/block/null_blk.c b/drivers/block
On 2 Nov 2015, at 17:25, Jens Axboe wrote:
> On 11/02/2015 07:31 AM, Paolo Valente wrote:
>> From: Arianna Avanzini
>>
>> In single-queue (block layer) mode, the function null_rq_prep_fn stops
>> the device if alloc_cmd fails. Then, once
On 2 Nov 2015, at 17:14, Jens Axboe wrote:
> On 11/02/2015 07:31 AM, Paolo Valente wrote:
>> For the Timer IRQ mode (i.e., when command completions are delayed),
>> there is one timer for each CPU. Each of these timers
>> . has a completion que
Hi,
I see an important interface problem. Userspace has been waiting for
io.weight to eventually become the file name for setting the weight
for the proportional-share policy [1,2]. If you use that name, how
will we solve this?
Thanks,
Paolo
[1] https://github.com/systemd/systemd/issues/7057#is
Jens, Tejun,
can we proceed with this double-interface solution?
Thanks,
Paolo
> On 1 Oct 2019, at 21:33, Paolo Valente wrote:
>
> Hi Jens,
>
> the first patch in this series is Tejun's patch for making BFQ disable
> io.cost. The second patch
unexpected loss of control on per-queue service
guarantees.
This commit solves this problem by adding an extra field, which stores
the actual last request dispatched for the in-service queue, and by
using this new field to correctly check case (b).
Signed-off-by: Paolo Valente
---
block/bfq-ios
Hi,
this batch of patches provides fixes and improvements for throughput
and latency. Every patch has been under test for at least one month,
some patches for much longer.
Thanks,
Paolo
Paolo Valente (14):
block, bfq: do not consider interactive queues in srt filtering
block, bfq: avoid
on a
fast device), then soft_rt_next_start is assigned such a high value
that, for a very long time, the queue cannot possibly be deemed
soft real-time.
This commit removes the updating of soft_rt_next_start for bfq_queues
in interactive weight raising.
Signed-off-by: Paolo
This is a preparatory commit for commits that need to check only one
of the two main reasons for idling. This change should also improve
the quality of the code a little bit, by splitting a function that
contains very long, non-trivial and loosely related comments.
Signed-off-by: Paolo Valente
selected for service. This would only
cause useless overhead. This commit avoids such a useless selection.
Signed-off-by: Paolo Valente
---
block/bfq-iosched.c | 10 +-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index c7a4a15c7c19
arrives. This commit draws this missing distinction and does
not perform harmful plugging.
Signed-off-by: Paolo Valente
---
block/bfq-iosched.c | 31 +--
1 file changed, 17 insertions(+), 14 deletions(-)
diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index
s.
Signed-off-by: Paolo Valente
---
block/bfq-iosched.c | 346 +---
1 file changed, 165 insertions(+), 181 deletions(-)
diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index a6fe60114ade..c1bb5e5fcdc4 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq
-off-by: Paolo Valente
---
block/bfq-iosched.c | 12 +++-
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index c1bb5e5fcdc4..12228af16198 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -235,6 +235,11 @@ static struct
and issue I/O with a low depth.
To reduce false negatives, this commit lowers the threshold.
Signed-off-by: Paolo Valente
---
block/bfq-iosched.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index bf585ad29bb5..48b579032d14
-by: Paolo Valente
---
block/bfq-iosched.c | 86 -
block/bfq-iosched.h | 8 +++--
block/bfq-wf2q.c| 12 +--
3 files changed, 59 insertions(+), 47 deletions(-)
diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index a9275ed57726
strong I/O
control. bfq does this enforcing when the scenario is asymmetric,
i.e., when some bfq_queue or group of bfq_queues is to be granted a
different bandwidth than some other bfq_queue or group of
bfq_queues. So, in such a scenario, this commit disables write
overcharging.
Signed-off-by: Paolo
ero hw_tag. But this happens because cfq doesn't dispatch enough requests,
not because the hardware queue doesn't work. Don't zero hw_tag in such a case.
Signed-off-by: Paolo Valente
---
block/bfq-iosched.c | 13 +
1 file changed, 13 insertions(+)
diff --git a/block/bfq-iosched
uest could happen to be redirected to a different
bfq_queue. As a consequence, the destination bfq_queue stored in the
request could be wrong. Such an event does not need to be handled any
longer.
Signed-off-by: Paolo Valente
---
block/bfq-iosched.c | 2 --
1 file changed, 2 deletions(-)
diff --g
further reference to the queue when the
weight of a queue is added, because the queue might otherwise be freed
before bfq_weights_tree_remove is invoked. This commit adds this
reference and makes all related modifications.
Signed-off-by: Paolo Valente
---
block/bfq-iosched.c | 17
inconsistent. This commit
solves this problem by lower-bounding the budget computed in
bfq_updated_next_req to the service currently charged to the queue.
Signed-off-by: Paolo Valente
---
block/bfq-iosched.c | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/block/bfq-iosched.c b
> On 7 Dec 2018, at 15:40, Jens Axboe wrote:
>
> On 12/7/18 3:01 AM, Paolo Valente wrote:
>>
>>
>>> On 7 Dec 2018, at 03:23, Jens Axboe wrote:
>>>
>>> On 12/6/18 11:18 AM, Paolo Valente wrote
> On 18 Jan 2019, at 11:31, Andrea Righi wrote:
>
> This is a redesign of my old cgroup-io-throttle controller:
> https://lwn.net/Articles/330531/
>
> I'm resuming this old patch to point out a problem that I think is still
> not solved completely.
>
> = Problem =
>
> T
> On 18 Jan 2019, at 12:10, Andrea Righi wrote:
>
> On Fri, Jan 18, 2019 at 12:04:17PM +0100, Paolo Valente wrote:
>>
>>
>>> On 18 Jan 2019, at 11:31, Andrea Righi wrote:
>>>
>>>
This reverts commit f0635b8a416e3b99dc6fd9ac3ce534764869d0c8.
---
block/bfq-iosched.c | 117 +---
1 file changed, 57 insertions(+), 60 deletions(-)
diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 8cc3032b66de..92214d58510c 100644
--- a/block/bf
This reverts commit bd7d4ef6a4c9b3611fa487a0065bf042c71ce620.
---
block/bfq-iosched.c | 15 ---
block/bfq-iosched.h | 6 ++
2 files changed, 14 insertions(+), 7 deletions(-)
diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index cd307767a134..8cc3032b66de 100644
--- a/block
unused variable"). The problem went away.
Maybe the assumption in commit f0635b8a416e ("bfq: calculate shallow
depths at init time") does not hold true?
Thanks,
Paolo
[1] https://bugzilla.kernel.org/show_bug.cgi?id=200813
Paolo Valente (2):
Revert "bfq-iosched: re
> On 18 Jan 2019, at 17:35, Josef Bacik wrote:
>
> On Fri, Jan 18, 2019 at 11:31:24AM +0100, Andrea Righi wrote:
>> This is a redesign of my old cgroup-io-throttle controller:
>> https://lwn.net/Articles/330531/
>>
>> I'm resuming this old patch to point out a problem tha
> On 18 Jan 2019, at 14:35, Jens Axboe wrote:
>
> On 1/18/19 4:52 AM, Paolo Valente wrote:
>> Hi Jens,
>> a user reported a warning, followed by freezes, in case he increases
>> nr_requests to more than 64 [1]. After reproducing the is
> On 22 May 2019, at 12:01, Srivatsa S. Bhat wrote:
>
> On 5/22/19 2:09 AM, Paolo Valente wrote:
>>
>> First, thank you very much for testing my patches, and, above all, for
>> sharing those huge traces!
>>
>> According
> On 20 May 2019, at 12:19, Paolo Valente wrote:
>
>
>
>> On 18 May 2019, at 22:50, Srivatsa S. Bhat wrote:
>>
>> On 5/18/19 11:39 AM, Paolo Valente wrote:
>>> I've addressed these issues in my
> On 1 Apr 2019, at 10:55, Dmitrii Tcvetkov wrote:
>
> On Mon, 1 Apr 2019 09:29:16 +0200
> Paolo Valente wrote:
>>
>>
>>> On 29 Mar 2019, at 15:10, Jens Axboe wrote:
>>>
>>> On 3/2
970 PRO,
gnome-terminal starts in 1.5 seconds after this fix, against 15
seconds before the fix (as a reference, gnome-terminal takes about 35
seconds to start with any of the other I/O schedulers).
Signed-off-by: Paolo Valente
---
block/bfq-iosched.c | 67 +-
Hi Jens,
this is a hopefully complete version of the fix proposed by Guenter [1].
Thanks,
Paolo
[1] https://lkml.org/lkml/2019/7/22/824
Paolo Valente (1):
block, bfq: handle NULL return value by bfq_init_rq()
block/bfq-iosched.c | 14 +++---
1 file changed, 11 insertions(+), 3
> On 8 Aug 2019, at 11:05, Sander Eikelenboom wrote:
>
> L.S.,
>
> While testing a linux 5.3-rc3 kernel on my Xen server I come across the splat
> below when trying to shutdown all the VM's.
> This is after the server has ran for a few days without any problem. It seems
> On 8 Aug 2019, at 12:21, Sander Eikelenboom wrote:
>
> On 08/08/2019 11:10, Paolo Valente wrote:
>>
>>
>>> On 8 Aug 2019, at 11:05, Sander Eikelenboom wrote:
>>>
>>> L.S.,
>>&
> On 6 Jun 2019, at 12:26, Christoph Hellwig wrote:
>
> This option is entirely bfq specific, give it an appropriate name.
>
> Also make it depend on CONFIG_BFQ_GROUP_IOSCHED in Kconfig, as all
> the functionality already does so anyway.
>
aolo
Francesco Pollicino (2):
block, bfq: print SHARED instead of pid for shared queues in logs
block, bfq: save & resume weight on a queue merge/split
Paolo Valente (7):
block, bfq: increase idling for weight-raised queues
block, bfq: do not idle for lowest-weight queues
block
second, instead of 2 seconds, if, in parallel, five files are
being read sequentially, and five more files are being written
sequentially
Signed-off-by: Paolo Valente
---
block/bfq-iosched.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index
than normal processes. Before
this commit, the total throughput was ~80 MB/sec on a PLEXTOR PX-256M5,
after this commit it is ~100 MB/sec.
Signed-off-by: Paolo Valente
---
block/bfq-iosched.c | 204 +---
block/bfq-iosched.h | 6 +-
block/bfq-wf2q.c
ith the
journaling daemon enjoying a higher weight than normal processes.
With this commit, the throughput grows from ~100 MB/s to ~150 MB/s on
a PLEXTOR PX-256M5.
Tested-by: Francesco Pollicino
Signed-off-by: Paolo Valente
---
block/bfq-iosched.c | 417
s issue by printing SHARED instead of a pid
if the queue is shared.
Signed-off-by: Francesco Pollicino
Signed-off-by: Paolo Valente
---
block/bfq-iosched.c | 10 ++
block/bfq-iosched.h | 23 +++
2 files changed, 29 insertions(+), 4 deletions(-)
diff --git a/block/bfq
The execution time of BFQ has been slightly lowered. Report the new
execution time in BFQ documentation.
Signed-off-by: Paolo Valente
---
Documentation/block/bfq-iosched.txt | 29 ++---
1 file changed, 22 insertions(+), 7 deletions(-)
diff --git a/Documentation/block
of control on process latencies.
Signed-off-by: Paolo Valente
---
block/bfq-iosched.c | 64 -
1 file changed, 51 insertions(+), 13 deletions(-)
diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index d34b80e5c47d..500b04df9efa 100644
--- a/block
-off-by: Paolo Valente
---
block/bfq-iosched.c | 14 ++
1 file changed, 14 insertions(+)
diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index b96be3764b8a..d34b80e5c47d 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -242,6 +242,14 @@ static struct kmem_cache
ctly resumed when the queue is
recycled, then the weight of the recycled queue could differ
from the weight of the original queue.
This commit adds the missing save & resume of the weight.
Signed-off-by: Francesco Pollicino
Signed-off-by: Paolo Valente
---
block/bfq-iosched.c | 2 ++
b
ulers
basically just pass I/O requests to the drive as fast as possible.
[1] https://github.com/Algodev-github/S
Tested-by: Francesco Pollicino
Signed-off-by: Alessio Masola
Signed-off-by: Paolo Valente
---
block/bfq-cgroup.c | 3 +-
block/bfq-iosched.c
Paolo
> This time when building without CONFIG_BFQ_GROUP_IOSCHED, see below..
>
> On 3/10/19 7:11 PM, Paolo Valente wrote:
>> From: Francesco Pollicino
>> The function "bfq_log_bfqq" prints the pid of the process
>> associated with the queue passed as input.
>>
l.org/lkml/2019/1/29/368
Thanks,
Paolo
Francesco Pollicino (2):
block, bfq: print SHARED instead of pid for shared queues in logs
block, bfq: save & resume weight on a queue merge/split
Paolo Valente (6):
block, bfq: increase idling for weight-raised queues
block, bfq: do not idle f
s issue by printing SHARED instead of a pid
if the queue is shared.
Signed-off-by: Francesco Pollicino
Signed-off-by: Paolo Valente
---
block/bfq-iosched.c | 10 ++
block/bfq-iosched.h | 18 --
2 files changed, 26 insertions(+), 2 deletions(-)
diff --git a/block/bfq-i