Hi,
I think one fix is to use _local_bh_enable() instead of local_bh_enable()
in u64_stats_fetch_retry_bh(). There's no enabling of irq in
_local_bh_enable().
But I wonder why we do WARN_ON_ONCE(!irqs_disabled()) in
_local_bh_enable()? What's the bad thing if someone calls
_local_bh_enable()
Hi, Tejun, Vivek and Jens,
I did tests and you affirmed the idea, and Vivek said he'd review the
last version of the patch. But it seems he left the blkio area more than
half a year ago. What should I do next to make progress?
BRs
Zhiguo
On Sun, Oct 20, 2013 at 8:11 PM, Hong Zhiguo honk...@gmail.com wrote:
> Token
Hi,
Thanks for the report. The q->queue_lock may be taken in irq context. And in
sys_read() context, we hold q->queue_lock with irq disabled and then
call local_bh_enable(), which turns irq back on. This leads to a
potential double acquire of queue_lock.
One fix is to use _local_bh_enable() instead of
Hi, Vivek,
I tested the PATCH v4 for some basic hierarchical setups as I did
before. And I got similar results.
Preparation
1) mount subsys blkio with "__DEVEL__sane_behavior"
2) Create 3 levels of directories under the blkio mount point:
mkdir 1
mkdir 1/2
mkdir 1/2/3
Hi, Vivek,
Sorry, I don't have time to test them carefully this week. I'll do it
this weekend.
The updating of nr_queued_tree level by level only happens once for
each bio. We already do the computation (and maybe queue operation)
level by level for every bio, even if it's passed through without
Hi, Vivek,
Thanks for your comments. I didn't realize the ancestor over-trim issue before.
Trimming of the iops token is not necessary, since a bio always costs
_one_ iops token. Trimming it won't change whether the current iops
token count is zero or not.
For hierarchical trimming, as you pointed out,
Hi, Vivek,
Thanks for your careful review. I'll rename t_c to last_dispatch, it's
much better.
For the big burst issue, I have a different opinion. Let's discuss it.
Any time, a big IO means a big burst. Even if it's throttled the first
time, queued in the service_queue, and then waits for a while,
Hi, Tejun,
I did the test for a 3-level hierarchy. It works.
Preparation
1) mount subsys blkio with "__DEVEL__sane_behavior"
2) Create 3 levels of directories under the blkio mount point:
mkdir 1
mkdir 1/2
mkdir 1/2/3
3) start 3 bash sessions, write their PIDs into:
Hi, Tejun,
I've not tested hierarchical setup yet. I'll do it tomorrow.
BTW, what kind of setup do you expect? Is a hierarchy of 2 levels enough?
Zhiguo
On Mon, Oct 14, 2013 at 9:36 PM, Tejun Heo wrote:
> Hello,
>
> Yes, this definitely is the direction we wanna take it. I'll wait for
> Vivek
Thanks, Thomas. But I didn't change any formatting. I just did the
substitution in place.
Should I re-format and re-send the patch?
On Thu, Mar 28, 2013 at 10:32 PM, Thomas Graf wrote:
> On 03/28/13 at 12:47am, Hong Zhiguo wrote:
>> diff --git a/net/ipv4/udp_diag.c b/net/ipv4/udp_diag.c
>> index
Thanks, Thomas. But I didn't change any formatting. I just did the
substitution in place.
On Thu, Mar 28, 2013 at 10:45 PM, Thomas Graf wrote:
> On 03/28/13 at 12:53am, Hong Zhiguo wrote:
>> Signed-off-by: Hong Zhiguo
>
> There are some formatting errors but the Netlink bits themselves
> look