On 12/04/2014 09:28 AM, Jeff Layton wrote:
> On Thu, 04 Dec 2014 09:17:17 -0800
> Shirley Ma wrote:
>
>> I am looking at how to reduce total RPC execution time in NFS/RDMA.
>> mountstats output shows that RPC backlog wait is too long, but increasing
>> the credit limit doesn't seem to help.

On Thu, 04 Dec 2014 09:17:17 -0800
Shirley Ma wrote:

I am looking at how to reduce total RPC execution time in NFS/RDMA. mountstats
output shows that RPC backlog wait is too long, but increasing the credit limit
doesn't seem to help. Would this patchset help reduce total RPC execution time?

Shirley
On 12/04/2014 03:47 AM, Jeff Layton wrote:
> I
On Wed, Dec 3, 2014 at 2:20 PM, Jeff Layton wrote:
> On Wed, 3 Dec 2014 14:08:01 -0500
> Trond Myklebust wrote:
>> Which workqueue are you using? Since the receive code is non-blocking,
>> I'd expect you might be able to use rpciod, for the initial socket
>> reads, but you wouldn't want to use
On Wed, 3 Dec 2014 10:56:49 -0500
Tejun Heo wrote:

Hello, Neil, Jeff.

On Tue, Dec 02, 2014 at 08:29:46PM -0500, Jeff Layton wrote:
> That's a good point. I had originally thought that max_active on an
> unbound workqueue would be the number of concurrent jobs that could run
> across all the CPUs, but now that I look I'm not sure that's really
> the
On Wed, 3 Dec 2014 12:11:18 +1100
NeilBrown wrote:
> On Tue, 2 Dec 2014 13:24:09 -0500 Jeff Layton wrote:
>
> > tl;dr: this code works and is much simpler than the dedicated thread
> > pool, but there are some latencies in the workqueue code that
> > seem to keep it from being
On Tue, 2 Dec 2014 14:26:55 -0500
Tejun Heo wrote:

On Tue, Dec 02, 2014 at 02:18:14PM -0500, Tejun Heo wrote:
...
> unbound. If strict cpu locality is likely to be beneficial and each
> work item isn't likely to consume huge amount of cpu cycles,
> WQ_CPU_INTENSIVE would fit better; otherwise, WQ_UNBOUND to let the
> scheduler do its thing.

Hmmm...
Hello, Jeff.

On Tue, Dec 02, 2014 at 02:26:27PM -0500, Jeff Layton wrote:
> I'm already using WQ_UNBOUND workqueues. If that exempts this code from
> the concurrency management, then that's probably not the problem. The
> jobs here aren't terribly CPU intensive, but they can sleep for a long
> time
Hello, Jeff.
On Tue, Dec 02, 2014 at 01:24:09PM -0500, Jeff Layton wrote:
> 2) get some insight about the latency from those with a better
> understanding of the CMWQ code. Any thoughts as to why we might be
> seeing such high latency here? Any ideas of what we can do about it?
The latency is
tl;dr: this code works and is much simpler than the dedicated thread
pool, but there are some latencies in the workqueue code that
seem to keep it from being as fast as it could be.

This patchset is a little skunkworks project that I've been poking at
for the last few weeks.