On Thu, Jun 28, 2018 at 08:38:34AM +0800, Ye Xiaolong wrote:
> Update the result:
>
> testcase/path_params/tbox_group/run: will-it-scale/poll2-performance/lkp-sb03
So this looks like a huge improvement in the per process ops, but not
as large as the original regression, and no change in the
On 06/27, Christoph Hellwig wrote:
>On Tue, Jun 26, 2018 at 02:03:38PM +0800, Ye Xiaolong wrote:
>> Hi,
>>
>> On 06/22, Christoph Hellwig wrote:
>> >Hi Xiaolong,
>> >
>> >can you retest this workload on the following branch:
>> >
>> >git://git.infradead.org/users/hch/vfs.git
On Tue, Jun 26, 2018 at 02:03:38PM +0800, Ye Xiaolong wrote:
> Hi,
>
> On 06/22, Christoph Hellwig wrote:
> >Hi Xiaolong,
> >
> >can you retest this workload on the following branch:
> >
> >git://git.infradead.org/users/hch/vfs.git remove-get-poll-head
> >
> >Gitweb:
> >
> >
> >
Hi,
On 06/22, Christoph Hellwig wrote:
>Hi Xiaolong,
>
>can you retest this workload on the following branch:
>
>git://git.infradead.org/users/hch/vfs.git remove-get-poll-head
>
>Gitweb:
>
>
> http://git.infradead.org/users/hch/vfs.git/shortlog/refs/heads/remove-get-poll-head
Here is the
On Fri, Jun 22, 2018 at 09:02:55PM +0100, Al Viro wrote:
> > While at the same time correct poll code already checks net_busy_loop_on
> > to set POLL_BUSY_LOOP. So except for sockets where people set the
> > timeout to 0 the code already does the right thing as-is. IMHO not
> > really worth
On Fri, Jun 22, 2018 at 06:18:02PM +0200, Christoph Hellwig wrote:
> On Fri, Jun 22, 2018 at 05:28:50PM +0200, Christoph Hellwig wrote:
> > On Fri, Jun 22, 2018 at 04:14:09PM +0100, Al Viro wrote:
> > > >
> > > >
On Fri, Jun 22, 2018 at 7:01 AM Al Viro wrote:
>
> On Fri, Jun 22, 2018 at 12:00:14PM +0200, Christoph Hellwig wrote:
> > And a version with select() also covered:
>
> For fuck sake, if you want vfs_poll() inlined, *make* *it* *inlined*.
> Is there any reason for not doing that other than
On Fri, Jun 22, 2018 at 05:28:50PM +0200, Christoph Hellwig wrote:
> On Fri, Jun 22, 2018 at 04:14:09PM +0100, Al Viro wrote:
> > >
> > > http://git.infradead.org/users/hch/vfs.git/shortlog/refs/heads/remove-get-poll-head
> >
> > See objections upthread re "fs,net: move poll busy loop
On Fri, Jun 22, 2018 at 04:14:09PM +0100, Al Viro wrote:
> >
> > http://git.infradead.org/users/hch/vfs.git/shortlog/refs/heads/remove-get-poll-head
>
> See objections upthread re "fs,net: move poll busy loop handling into a
> separate method"; as for the next one... I'd like an ACK from
On Fri, Jun 22, 2018 at 05:02:51PM +0200, Christoph Hellwig wrote:
> Hi Xiaolong,
>
> can you retest this workload on the following branch:
>
> git://git.infradead.org/users/hch/vfs.git remove-get-poll-head
>
> Gitweb:
>
>
>
Hi Xiaolong,
can you retest this workload on the following branch:
git://git.infradead.org/users/hch/vfs.git remove-get-poll-head
Gitweb:
http://git.infradead.org/users/hch/vfs.git/shortlog/refs/heads/remove-get-poll-head
On Fri, Jun 22, 2018 at 02:33:07PM +0200, Christoph Hellwig wrote:
> On Fri, Jun 22, 2018 at 01:17:22PM +0100, Al Viro wrote:
> > > The problem is that call to sk_busy_loop(), which is going to be indirect
> > > no matter what.
> >
> > if ->f_poll_head is NULL {
> > use ->poll
> >
On Fri, Jun 22, 2018 at 01:17:22PM +0100, Al Viro wrote:
> > The problem is that call to sk_busy_loop(), which is going to be indirect
> > no matter what.
>
> if ->f_poll_head is NULL {
> use ->poll
> } else {
> if can ll_poll (checked in ->f_mode)
>
On Fri, Jun 22, 2018 at 02:07:39PM +0200, Christoph Hellwig wrote:
> On Fri, Jun 22, 2018 at 12:56:13PM +0100, Al Viro wrote:
> > So mark that in ->f_mode - I strongly suspect that
> > sk_can_busy_loop(sock->sk) can't change while an opened file is there.
> > And lift that (conditional on new
On Fri, Jun 22, 2018 at 12:56:13PM +0100, Al Viro wrote:
> So mark that in ->f_mode - I strongly suspect that
> sk_can_busy_loop(sock->sk) can't change while an opened file is there.
> And lift that (conditional on new FMODE_BUSY_LOOP) into do_poll()
> and do_select() - we *already* have
On Fri, Jun 22, 2018 at 01:53:00PM +0200, Christoph Hellwig wrote:
> > Now, ->sk_wq is modified only in sock_init_data() and sock_graft();
> > the latter, IIRC, is ->accept() helper. Do we ever call either of
> > those on a sock of already opened file? IOW, is there any real
> > reason for
On Fri, Jun 22, 2018 at 12:01:17PM +0100, Al Viro wrote:
> For fuck sake, if you want vfs_poll() inlined, *make* *it* *inlined*.
That is not going to help with de-virtualizing _qproc, which was
the whole idea of that change. At least not without a compiler
way smarter than gcc.
But if you want
On Fri, Jun 22, 2018 at 12:00:14PM +0200, Christoph Hellwig wrote:
> And a version with select() also covered:
For fuck sake, if you want vfs_poll() inlined, *make* *it* *inlined*.
Is there any reason for not doing that other than EXPORT_SYMBOL_GPL
fetish? Because if there isn't, I would like to
On Fri, Jun 22, 2018 at 7:02 PM Linus Torvalds wrote:
>
> Get your act together. Don't uglify and slow down everything else just
> because you're concentrating only on aio.
.. and seriously, poll and select are timing-critical. There are many
real loads where they show up as *the* thing in
On Fri, Jun 22, 2018 at 6:46 PM Christoph Hellwig wrote:
>
> > The disadvantages are obvious: every poll event now causes *two*
> > indirect branches to the low-level filesystem or driver - one to get
> > the poll head, and one to get the mask. Add to that all the new "do we
> > have the new-style
And a version with select() also covered:
---
From 317159003ae28113cf759c632b161fb39192fe3c Mon Sep 17 00:00:00 2001
From: Christoph Hellwig
Date: Fri, 22 Jun 2018 11:36:26 +0200
Subject: fs: optimize away ->_qproc indirection for poll_mask based polling
Signed-off-by: Christoph Hellwig
---
On Fri, Jun 22, 2018 at 06:25:45PM +0900, Linus Torvalds wrote:
> What was the alleged advantage of the new poll methods again? Because
> it sure isn't obvious - not from the numbers, and not from the commit
> messages.
The primary goal is that we can implement a race-free aio poll,
the primary
[ Added Al, since this all came in through his trees. The guilty
authors were already added by the robot ]
On Fri, Jun 22, 2018 at 5:31 PM kernel test robot wrote:
>
> FYI, we noticed a -8.8% regression of will-it-scale.per_process_ops due to
> commit:
Guys, this seems pretty big.
What was