On Mon, Feb 10, 2014 at 04:06:27PM -0700, Jens Axboe wrote:
> It obviously all depends on the access pattern. X threads for X tags
> would work perfectly well with per-cpu tagging, if they are doing
> sync IO. And similarly, 8 threads each having low queue depth would
> be fine. However, it all falls
On Mon, Feb 10, 2014 at 04:06:27PM -0700, Jens Axboe wrote:
> For the common case, I'd assume that anywhere between 31..256 tags
> is "normal". That's where the majority of devices will end up being,
> largely. So single digits would be an anomaly.

Unfortunately that's not true in SCSI land, where most drivers do per-lun
tagging, and the cmd_per_lun values are very low and very often
single digits, as a simple grep for cmd_per_lun will tell.

On Tue, Feb 11, 2014 at 06:42:40AM -0800, James Bottomley wrote:
> > Unfortunately that's not true in SCSI land, where most drivers do per-lun
> > tagging, and the cmd_per_lun values are very low and very often
> > single digits, as a simple grep for cmd_per_lun will tell.
>
> Remember we do shared
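For reference, this is the sort of declaration a grep for cmd_per_lun turns
up. A minimal illustrative sketch of a SCSI host template (made-up driver
name and values, not taken from any real LLD):

#include <scsi/scsi_host.h>

/* Illustrative only: many legacy SCSI LLDs declare a single-digit
 * per-LUN queue depth like this in their host template. */
static struct scsi_host_template example_sht = {
	.name		= "example-hba",
	.can_queue	= 64,	/* total commands the HBA can queue */
	.cmd_per_lun	= 3,	/* per-LUN depth, very often single digits */
};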
On Sun, Feb 09, 2014 at 04:50:07PM +0100, Alexander Gordeev wrote:
> Yeah, that was my first thought when I posted "percpu_ida: Allow variable
> maximum number of cached tags" patch a few months ago. But I am back-
> pedalling as it does not appear to solve the fundamental problem - what is the
> best
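For context, a minimal sketch of the percpu_ida interface under discussion,
going by the 3.13-era API (worth double-checking against the tree). The pool
keeps a global freelist plus a per-cpu cache of free tags; example_setup()
and example_io() are hypothetical callers:

#include <linux/percpu_ida.h>
#include <linux/sched.h>

static struct percpu_ida tag_pool;

static int example_setup(unsigned long nr_tags)
{
	/* Sets up the global freelist and the per-cpu tag caches. */
	return percpu_ida_init(&tag_pool, nr_tags);
}

static void example_io(void)
{
	/* Sleeps until a tag is free; passing TASK_RUNNING instead
	 * would return -ENOSPC rather than sleep. */
	int tag = percpu_ida_alloc(&tag_pool, TASK_UNINTERRUPTIBLE);

	/* ... issue the command using 'tag' ... */

	percpu_ida_free(&tag_pool, tag);
}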
On Mon, Feb 10, 2014 at 04:49:17PM +0100, Alexander Gordeev wrote:
> > Do we really always need the pool for these classes of devices?
> >
> > Pulling tags from local caches to the pool just to (near to) dry it at
> > the very next iteration does not seem beneficial. Not to mention caches
> > vs pool

On Mon, Feb 10, 2014 at 01:29:42PM +0100, Alexander Gordeev wrote:
> > We'll definitively need a fix to be able to allow the whole tag space.
> > For large numbers of tags per device the flush might work, but for
> > devices with low number of tags we need something more efficient. The
> > case of less

On Mon, Feb 10, 2014 at 02:32:11AM -0800, Christoph Hellwig wrote:
> > Maybe we can walk off with a per-cpu timeout that flushes a batch of tags
> > from local caches to the pool? Each local allocation would restart the
> > timer, but once allocation requests stopped coming on a CPU the tags would
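The proposed timeout could look roughly like this. A sketch only, written
against the current timer API: flush_batch_to_pool() is a hypothetical
helper, the interval and batch size are made up, and per-cpu timer setup
(timer_setup() at init time) is omitted:

#include <linux/timer.h>
#include <linux/percpu.h>

#define TAG_FLUSH_JIFFIES	(HZ / 10)	/* made-up interval */
#define TAG_FLUSH_BATCH		8		/* made-up batch nr */

/* Hypothetical helper: move nr cached tags from this cpu to the pool. */
void flush_batch_to_pool(int cpu, unsigned int nr);

static DEFINE_PER_CPU(struct timer_list, tag_flush_timer);

/* Wired up via timer_setup() at init time (omitted here). */
static void tag_flush_fn(struct timer_list *t)
{
	/* No allocations on this CPU lately: drain a batch of its tags. */
	flush_batch_to_pool(smp_processor_id(), TAG_FLUSH_BATCH);
}

static inline void note_local_alloc(void)
{
	/* Called on every local allocation: push the deadline out. */
	mod_timer(this_cpu_ptr(&tag_flush_timer),
		  jiffies + TAG_FLUSH_JIFFIES);
}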
On Mon, Jan 06, 2014 at 01:47:26PM -0800, Kent Overstreet wrote:
> Ok, so I hadn't really given any thought to that kind of use case; insofar
> as I had I would've been skeptical percpu tag allocation made sense for 32
> different tags at all.
>
> We really don't want to screw over the users that
On 01/06/2014 01:46 PM, Kent Overstreet wrote:
> On Sun, Jan 05, 2014 at 09:13:00PM +0800, Shaohua Li wrote:
>>> - we explicitly don't guarantee that all
>>> the tags will be available for allocation at any given time, only half
>>> of them.
>>
>> only half of the tags can be used? this is scary. Of course
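To put numbers on the guarantee Shaohua is reacting to, a userspace toy
model (illustrative arithmetic only, not kernel code):

#include <stdio.h>

/* Tags parked in other CPUs' caches are invisible to an allocator
 * until stolen, and stealing only kicks in while cached tags exceed
 * half the tag space, so only half the tags are guaranteed to be
 * allocatable at any given moment. */
static unsigned int guaranteed_tags(unsigned int nr_tags)
{
	return nr_tags - nr_tags / 2;
}

int main(void)
{
	/* A 32-tag device: callers can only count on 16 tags. */
	printf("guaranteed: %u of 32\n", guaranteed_tags(32));
	return 0;
}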
On Tue, Dec 31, 2013 at 11:38:27AM +0800, Shaohua Li wrote:
> steal_tags only happens when free tags is more than half of the total tags.
> This is too restrictive and can cause livelock. I found one cpu has free
> tags, but other cpus can't steal (threads are bound to specific cpus);
> threads which
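The check at issue, paraphrased from the steal_tags() loop in
lib/percpu_ida.c circa 3.13 (field names from memory, so treat this as a
sketch rather than the exact code):

#include <linux/cpumask.h>
#include <linux/percpu_ida.h>

static void steal_condition_sketch(struct percpu_ida *pool)
{
	unsigned int cpus_have_tags, cpu = pool->cpu_last_stolen;

	/* Stealing proceeds only while the tags potentially parked in
	 * per-cpu caches still exceed half of the tag space, so a lone
	 * remote cache below that bound is never raided; that is the
	 * situation Shaohua describes. */
	for (cpus_have_tags = cpumask_weight(&pool->cpus_have_tags);
	     cpus_have_tags * pool->percpu_max_size > pool->nr_tags / 2;
	     cpus_have_tags--) {
		cpu = cpumask_next(cpu, &pool->cpus_have_tags);
		/* ... try to move that CPU's cached tags to the pool ... */
	}
}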