Re: [PATCH 1/6] lib: Implement range locks

2013-02-11 Thread Jan Kara
On Mon 11-02-13 03:03:30, Michel Lespinasse wrote:
> On Mon, Feb 11, 2013 at 2:27 AM, Jan Kara  wrote:
> > On Sun 10-02-13 21:42:32, Michel Lespinasse wrote:
> >> On Thu, Jan 31, 2013 at 1:49 PM, Jan Kara  wrote:
> >> > +void range_lock_init(struct range_lock *lock, unsigned long start,
> >> > +unsigned long end);
> >> > +void range_lock(struct range_lock_tree *tree, struct range_lock *lock);
> >> > +void range_unlock(struct range_lock_tree *tree, struct range_lock *lock);
> >>
> >> Is there a point to separating the init and lock stages ? maybe the API could be
> >> void range_lock(struct range_lock_tree *tree, struct range_lock *lock,
> >> unsigned long start, unsigned long last);
> >> void range_unlock(struct range_lock_tree *tree, struct range_lock *lock);
> >   I was thinking about this as well. Currently I don't have a place which
> > would make it beneficial to separate _init and _lock but I can imagine such
> > uses (where you don't want to pass the interval information down the stack
> > and it's easier to pass the whole lock structure). Also it looks a bit
> > confusing to pass (tree, lock, start, last) to the locking function. So I
> > left it there.
> >
> > OTOH I had to somewhat change the API so that the locking phase is now
> > separated in "lock_prep" phase which inserts the node into the tree and
> > counts blocking ranges and "wait" phase which waits for the blocking ranges
> > to unlock. The reason for this split is that while "lock_prep" needs to
> > happen under some lock synchronizing operations on the tree, "wait" phase
> > can be easily lockless. So this allows me to remove the knowledge of how
> > operations on the tree are synchronized from range locking code itself.
> > That further allowed me to use mapping->tree_lock for synchronization and
> > basically reduce the cost of mapping range locking close to 0 for buffered
> > IO (just a single tree lookup in the tree in the fast path).
> 
> Ah yes, being able to externalize the lock is good.
> 
> I think in this case, it makes the most sense for lock_prep phase to
> also initialize the lock node, though.
  I guess so.

> >> Reviewed-by: Michel Lespinasse 
> >   I actually didn't add this because there are some differences in the
> > current version...
> 
> Did I miss another posting of yours, or is that coming up ?
  That will come. But as Dave Chinner pointed out for buffered writes we
should rather lock the whole range specified in the syscall (to avoid
strange results of racing truncate / write when i_mutex isn't used) and
that requires us to put the range lock above mmap_sem which isn't currently
easily possible due to page fault handling... So if the whole patch set
should go anywhere I need to solve that somehow.

Honza
-- 
Jan Kara 
SUSE Labs, CR
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH 1/6] lib: Implement range locks

2013-02-11 Thread Michel Lespinasse
On Mon, Feb 11, 2013 at 2:27 AM, Jan Kara  wrote:
> On Sun 10-02-13 21:42:32, Michel Lespinasse wrote:
>> On Thu, Jan 31, 2013 at 1:49 PM, Jan Kara  wrote:
>> > +void range_lock_init(struct range_lock *lock, unsigned long start,
>> > +unsigned long end);
>> > +void range_lock(struct range_lock_tree *tree, struct range_lock *lock);
>> > +void range_unlock(struct range_lock_tree *tree, struct range_lock *lock);
>>
>> Is there a point to separating the init and lock stages ? maybe the API could be
>> void range_lock(struct range_lock_tree *tree, struct range_lock *lock,
>> unsigned long start, unsigned long last);
>> void range_unlock(struct range_lock_tree *tree, struct range_lock *lock);
>   I was thinking about this as well. Currently I don't have a place which
> would make it beneficial to separate _init and _lock but I can imagine such
> uses (where you don't want to pass the interval information down the stack
> and it's easier to pass the whole lock structure). Also it looks a bit
> confusing to pass (tree, lock, start, last) to the locking function. So I
> left it there.
>
> OTOH I had to somewhat change the API so that the locking phase is now
> separated in "lock_prep" phase which inserts the node into the tree and
> counts blocking ranges and "wait" phase which waits for the blocking ranges
> to unlock. The reason for this split is that while "lock_prep" needs to
> happen under some lock synchronizing operations on the tree, "wait" phase
> can be easily lockless. So this allows me to remove the knowledge of how
> operations on the tree are synchronized from range locking code itself.
> That further allowed me to use mapping->tree_lock for synchronization and
> basically reduce the cost of mapping range locking close to 0 for buffered
> IO (just a single tree lookup in the tree in the fast path).

Ah yes, being able to externalize the lock is good.

I think in this case, it makes the most sense for lock_prep phase to
also initialize the lock node, though.

>> Reviewed-by: Michel Lespinasse 
>   I actually didn't add this because there are some differences in the
> current version...

Did I miss another posting of yours, or is that coming up ?

Cheers,

-- 
Michel "Walken" Lespinasse
A program is never fully debugged until the last user dies.


Re: [PATCH 1/6] lib: Implement range locks

2013-02-11 Thread Jan Kara
On Sun 10-02-13 21:42:32, Michel Lespinasse wrote:
> Hi Jan,
> 
> On Thu, Jan 31, 2013 at 1:49 PM, Jan Kara  wrote:
> > Implement range locking using interval tree.
> 
> Yay! I like to see interval trees being put to good use.
  Yeah, you saved me from coding an interval tree implementation :) The
code I originally planned would be slightly more efficient, I think, but
yours is far simpler.

> > +/*
> > + * Range locking
> > + *
> > + * We allow exclusive locking of arbitrary ranges. We guarantee that each
> > + * range is locked only after all conflicting range locks requested previously
> > + * have been unlocked. Thus we achieve fairness and avoid livelocks.
> > + *
> > + * The cost of lock and unlock of a range is O(log(R_all)+R_int) where R_all is
> > + * total number of ranges and R_int is the number of ranges intersecting the
> > + * operated range.
> > + */
> 
> I think the cost is actually O((1+R_int)log(R_all)) as each
> interval_tree_iter_{first,next} call is O(log(R_all))
  Right. I'll fix that in the comment.
 
> Not that it'll make a huge difference in practice - the cost will be
> cheap enough either way.
> 
> > +struct range_lock {
> > +   struct interval_tree_node node;
> > +   struct task_struct *task;
> > +   /* Number of ranges which are blocking acquisition of the lock */
> s/ranges/previously requested ranges/
> 
> I think it's worth writing this down as I originally found this confusing.
> 
> BTW, I like how you only count previously requested ranges in order to
> guarantee fairness. This was absolutely not obvious to me.
  OK, I'll update the comment.

> > +#define RANGE_LOCK_INITIALIZER(start, end) {\
> > +   .node = {\
> > +   .start = (start),\
> > +   .end = (end)\
> > +   }\
> > +}
> 
> I have not found any uses of this, but it seems it wouldn't work as
> you want .last instead of .end
  I'll just delete it I guess.

> BTW, it's important to make it clear that last is the last value that
> is *included* in the interval, not the first value that follows it.
  In current versions I have this noted at function definitions.

> > +void range_lock_init(struct range_lock *lock, unsigned long start,
> > +unsigned long end);
> > +void range_lock(struct range_lock_tree *tree, struct range_lock *lock);
> > +void range_unlock(struct range_lock_tree *tree, struct range_lock *lock);
> 
> Is there a point to separating the init and lock stages ? maybe the API could be
> void range_lock(struct range_lock_tree *tree, struct range_lock *lock,
> unsigned long start, unsigned long last);
> void range_unlock(struct range_lock_tree *tree, struct range_lock *lock);
  I was thinking about this as well. Currently I don't have a place which
would make it beneficial to separate _init and _lock but I can imagine such
uses (where you don't want to pass the interval information down the stack
and it's easier to pass the whole lock structure). Also it looks a bit
confusing to pass (tree, lock, start, last) to the locking function. So I
left it there. 

OTOH I had to somewhat change the API so that the locking phase is now
separated in "lock_prep" phase which inserts the node into the tree and
counts blocking ranges and "wait" phase which waits for the blocking ranges
to unlock. The reason for this split is that while "lock_prep" needs to
happen under some lock synchronizing operations on the tree, "wait" phase
can be easily lockless. So this allows me to remove the knowledge of how
operations on the tree are synchronized from range locking code itself.
That further allowed me to use mapping->tree_lock for synchronization and
basically reduce the cost of mapping range locking close to 0 for buffered
IO (just a single tree lookup in the tree in the fast path).

So maybe we want to reduce the number of calls for locking from 3 to 2 by
removing the _init phase. I'm not really decided: for the mapping range lock
itself, the lock operation is squashed into 1 call anyway and we don't have
other users now...

> (I changed end to last because I think end makes it sound like it's
> the first value after the interval, while last makes it clear that
> it's the last value in the interval)
  This may be a useful change. I'll use that I think.

> > +/*
> > + * Implementation of range locks.
> > + *
> > + * We keep interval tree of locked and to-be-locked ranges. When new range lock
> > + * is requested, we add its interval to the tree and store number of intervals
> > + * intersecting it to 'blocking_ranges'.
> > + *
> > + * When a range is unlocked, we again walk intervals that intersect with the
> > + * unlocked one and decrement their 'blocking_ranges'.  We wake up owner of any
> > + * range lock whose 'blocking_ranges' drops to 0.
> > + */
> 
> May be worth repeating the comment about how this achieves fairness
> and avoids livelocks.
  Good idea. Added.

> > +void range_lock_init(struct range_lock *lock, unsigned long start,
> > +unsigned long end)
> > +{
> > +   lock->node.start = start;
> > +   lock->node.last = 


Re: [PATCH 1/6] lib: Implement range locks

2013-02-10 Thread Michel Lespinasse
Hi Jan,

On Thu, Jan 31, 2013 at 1:49 PM, Jan Kara  wrote:
> Implement range locking using interval tree.

Yay! I like to see interval trees being put to good use.

> +/*
> + * Range locking
> + *
> + * We allow exclusive locking of arbitrary ranges. We guarantee that each
> + * range is locked only after all conflicting range locks requested previously
> + * have been unlocked. Thus we achieve fairness and avoid livelocks.
> + *
> + * The cost of lock and unlock of a range is O(log(R_all)+R_int) where R_all is
> + * total number of ranges and R_int is the number of ranges intersecting the
> + * operated range.
> + */

I think the cost is actually O((1+R_int)log(R_all)) as each
interval_tree_iter_{first,next} call is O(log(R_all))

Not that it'll make a huge difference in practice - the cost will be
cheap enough either way.

> +struct range_lock {
> +   struct interval_tree_node node;
> +   struct task_struct *task;
> +   /* Number of ranges which are blocking acquisition of the lock */
s/ranges/previously requested ranges/

I think it's worth writing this down as I originally found this confusing.

BTW, I like how you only count previously requested ranges in order to
guarantee fairness. This was absolutely not obvious to me.

> +#define RANGE_LOCK_INITIALIZER(start, end) {\
> +   .node = {\
> +   .start = (start),\
> +   .end = (end)\
> +   }\
> +}

I have not found any uses of this, but it seems it wouldn't work as
you want .last instead of .end

BTW, it's important to make it clear that last is the last value that
is *included* in the interval, not the first value that follows it.

> +void range_lock_init(struct range_lock *lock, unsigned long start,
> +unsigned long end);
> +void range_lock(struct range_lock_tree *tree, struct range_lock *lock);
> +void range_unlock(struct range_lock_tree *tree, struct range_lock *lock);

Is there a point to separating the init and lock stages ? maybe the API could be
void range_lock(struct range_lock_tree *tree, struct range_lock *lock,
unsigned long start, unsigned long last);
void range_unlock(struct range_lock_tree *tree, struct range_lock *lock);

(I changed end to last because I think end makes it sound like it's
the first value after the interval, while last makes it clear that
it's the last value in the interval)

> +/*
> + * Implementation of range locks.
> + *
> + * We keep interval tree of locked and to-be-locked ranges. When new range lock
> + * is requested, we add its interval to the tree and store number of intervals
> + * intersecting it to 'blocking_ranges'.
> + *
> + * When a range is unlocked, we again walk intervals that intersect with the
> + * unlocked one and decrement their 'blocking_ranges'.  We wake up owner of any
> + * range lock whose 'blocking_ranges' drops to 0.
> + */

May be worth repeating the comment about how this achieves fairness
and avoids livelocks.

> +void range_lock_init(struct range_lock *lock, unsigned long start,
> +unsigned long end)
> +{
> +   lock->node.start = start;
> +   lock->node.last = end;
> +   RB_CLEAR_NODE(&lock->node.rb);

I really wish people didn't unnecessarily use RB_CLEAR_NODE before
inserting nodes in an rbtree.
RB_CLEAR_NODE is never necessary unless you want to tag unused nodes
and check them later using RB_EMPTY_NODES.

> +void range_lock(struct range_lock_tree *tree, struct range_lock *lock)
> +{
> +   struct interval_tree_node *node;
> +   unsigned long flags;
> +
> +   spin_lock_irqsave(&tree->lock, flags);

Are you expecting range locks to be used from hardirq context ? If
not, it may be more appropriate to just use spin_lock_bh ?

> +   node = interval_tree_iter_first(&tree->root, lock->node.start,
> +   lock->node.last);
> +   while (node) {
> +   lock->blocking_ranges++;
> +   node = interval_tree_iter_next(node, lock->node.start,
> +  lock->node.last);
> +   }

Nitpicking here, but I think this is slightly easier to read as a for loop:
for (node = interval_tree_iter_first(...);
 node;
 node = interval_tree_iter_next(...))
lock->blocking_ranges++;

> +   /* Do we need to go to sleep? */
> +   while (lock->blocking_ranges) {
> +   lock->task = current;
> +   __set_current_state(TASK_UNINTERRUPTIBLE);
> +   spin_unlock_irqrestore(&tree->lock, flags);
> +   schedule();
> +   spin_lock_irqsave(&tree->lock, flags);
> +   }
> +   spin_unlock_irqrestore(&tree->lock, flags);

I think I would prefer:
lock->task = tsk = current;
spin_unlock_irqrestore(&tree->lock, flags);
while (true) {
set_task_state(tsk, TASK_UNINTERRUPTIBLE);
if (!lock->blocking_ranges)
break;
schedule();
}
set_task_state(tsk, TASK_RUNNING);

This avoids an unnecessary spinlock 


Re: [PATCH 1/6] lib: Implement range locks

2013-02-04 Thread Jan Kara
On Thu 31-01-13 15:57:26, Andrew Morton wrote:
> On Thu, 31 Jan 2013 22:49:49 +0100
> Jan Kara  wrote:
> 
> > Implement range locking using interval tree.
> > 
> > ...
> >
> > +void range_lock(struct range_lock_tree *tree, struct range_lock *lock)
> > +{
> > +   struct interval_tree_node *node;
> > +   unsigned long flags;
> > +
> > +   spin_lock_irqsave(&tree->lock, flags);
> > +   node = interval_tree_iter_first(&tree->root, lock->node.start,
> > +   lock->node.last);
> > +   while (node) {
> > +   lock->blocking_ranges++;
> > +   node = interval_tree_iter_next(node, lock->node.start,
> > +  lock->node.last);
> > +   }
> > +   interval_tree_insert(&lock->node, &tree->root);
> > +   /* Do we need to go to sleep? */
> > +   while (lock->blocking_ranges) {
> > +   lock->task = current;
> > +   __set_current_state(TASK_UNINTERRUPTIBLE);
> > +   spin_unlock_irqrestore(&tree->lock, flags);
> > +   schedule();
> > +   spin_lock_irqsave(&tree->lock, flags);
> > +   }
> > +   spin_unlock_irqrestore(&tree->lock, flags);
> > +}
> >
> > ...
> >
> > +void range_unlock(struct range_lock_tree *tree, struct range_lock *lock)
> > +{
> > +   struct interval_tree_node *node;
> > +   unsigned long flags;
> > +
> > +   spin_lock_irqsave(&tree->lock, flags);
> > +   interval_tree_remove(&lock->node, &tree->root);
> > +   node = interval_tree_iter_first(&tree->root, lock->node.start,
> > +   lock->node.last);
> > +   while (node) {
> > +   range_lock_unblock((struct range_lock *)node);
> > +   node = interval_tree_iter_next(node, lock->node.start,
> > +  lock->node.last);
> > +   }
> > +   spin_unlock_irqrestore(>lock, flags);
> > +}
> 
> What are the worst-case interrupt-off durations here?
  Good question. In theory, it could be relatively long, e.g. when there
are lots of DIOs in flight against a range which is locked. I'll do some
measurements to get some idea.
 
> I note that the new exported functions in this patchset are
> refreshingly free of documentation ;)
  They are *so* obvious ;) Point taken... Thanks for review.

Honza
-- 
Jan Kara 
SUSE Labs, CR
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH 1/6] lib: Implement range locks

2013-01-31 Thread Andrew Morton
On Thu, 31 Jan 2013 22:49:49 +0100
Jan Kara  wrote:

> Implement range locking using interval tree.
> 
> ...
>
> +void range_lock(struct range_lock_tree *tree, struct range_lock *lock)
> +{
> +	struct interval_tree_node *node;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&tree->lock, flags);
> +	node = interval_tree_iter_first(&tree->root, lock->node.start,
> +					lock->node.last);
> +	while (node) {
> +		lock->blocking_ranges++;
> +		node = interval_tree_iter_next(node, lock->node.start,
> +					       lock->node.last);
> +	}
> +	interval_tree_insert(&lock->node, &tree->root);
> +	/* Do we need to go to sleep? */
> +	while (lock->blocking_ranges) {
> +		lock->task = current;
> +		__set_current_state(TASK_UNINTERRUPTIBLE);
> +		spin_unlock_irqrestore(&tree->lock, flags);
> +		schedule();
> +		spin_lock_irqsave(&tree->lock, flags);
> +	}
> +	spin_unlock_irqrestore(&tree->lock, flags);
> +}
>
> ...
>
> +void range_unlock(struct range_lock_tree *tree, struct range_lock *lock)
> +{
> +	struct interval_tree_node *node;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&tree->lock, flags);
> +	interval_tree_remove(&lock->node, &tree->root);
> +	node = interval_tree_iter_first(&tree->root, lock->node.start,
> +					lock->node.last);
> +	while (node) {
> +		range_lock_unblock((struct range_lock *)node);
> +		node = interval_tree_iter_next(node, lock->node.start,
> +					       lock->node.last);
> +	}
> +	spin_unlock_irqrestore(&tree->lock, flags);
> +}

What are the worst-case interrupt-off durations here?

I note that the new exported functions in this patchset are
refreshingly free of documentation ;)


