Re: [PATCH 03/11] ftrace: Cleanup regex_lock and ftrace_lock around hash updating

2013-05-09 Thread Steven Rostedt
On Thu, 2013-05-09 at 13:12 -0400, Steven Rostedt wrote:
> On Thu, 2013-05-09 at 14:44 +0900, Masami Hiramatsu wrote:
> > Cleanup regex_lock and ftrace_lock locking points around
> > ftrace_ops hash update code.
> > 
> > The new rule is that regex_lock protects ops->*_hash
> > read-update-write code for each ftrace_ops. Usually,
> > hash update is done by following sequence.
> > 
> > 1. allocate a new local hash and copy the original hash.
> > 2. update the local hash.
> > 3. move(actually, copy) back the local hash to ftrace_ops.
> > 4. update ftrace entries if needed.
> > 5. release the local hash.
> > 
> > This makes regex_lock to protect #1-#4, and ftrace_lock
> > to protect #3 and #4.
> 
> Patch looks good, but I don't see how ftrace_lock protects #3. The two
> things that I see ftrace_lock protecting is the update to the ftrace
> entries, and the adding and removing ftrace_ops to the ftrace_ops_list.

OK, yeah, the move does an implicit update of the entries, thus it does
need the lock. I'll update the change log to reflect that.

-- Steve


--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH 03/11] ftrace: Cleanup regex_lock and ftrace_lock around hash updating

2013-05-09 Thread Steven Rostedt
On Thu, 2013-05-09 at 14:44 +0900, Masami Hiramatsu wrote:
> Cleanup regex_lock and ftrace_lock locking points around
> ftrace_ops hash update code.
> 
> The new rule is that regex_lock protects ops->*_hash
> read-update-write code for each ftrace_ops. Usually,
> hash update is done by following sequence.
> 
> 1. allocate a new local hash and copy the original hash.
> 2. update the local hash.
> 3. move(actually, copy) back the local hash to ftrace_ops.
> 4. update ftrace entries if needed.
> 5. release the local hash.
> 
> This makes regex_lock to protect #1-#4, and ftrace_lock
> to protect #3 and #4.

Patch looks good, but I don't see how ftrace_lock protects #3. The two
things that I see ftrace_lock protecting are the update to the ftrace
entries, and the adding and removing of ftrace_ops to the ftrace_ops_list.

-- Steve

> 
> Signed-off-by: Masami Hiramatsu 
> Cc: Steven Rostedt 
> Cc: Frederic Weisbecker 
> Cc: Ingo Molnar 
> Cc: Tom Zanussi 
> ---




[PATCH 03/11] ftrace: Cleanup regex_lock and ftrace_lock around hash updating

2013-05-08 Thread Masami Hiramatsu
Clean up the regex_lock and ftrace_lock locking points around the
ftrace_ops hash update code.

The new rule is that regex_lock protects the ops->*_hash
read-update-write code for each ftrace_ops. Usually, a
hash update is done by the following sequence.

1. allocate a new local hash and copy the original hash.
2. update the local hash.
3. move (actually, copy) the local hash back to ftrace_ops.
4. update the ftrace entries if needed.
5. release the local hash.

This makes regex_lock protect #1-#4, and ftrace_lock
protect #3 and #4.

Signed-off-by: Masami Hiramatsu 
Cc: Steven Rostedt 
Cc: Frederic Weisbecker 
Cc: Ingo Molnar 
Cc: Tom Zanussi 
---
 kernel/trace/ftrace.c |   59 +++--
 1 file changed, 32 insertions(+), 27 deletions(-)

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 3f29b3d..0575b3f 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -2644,28 +2644,26 @@ ftrace_regex_open(struct ftrace_ops *ops, int flag,
return -ENOMEM;
}
 
+   iter->ops = ops;
+   iter->flags = flag;
+
+   mutex_lock(&ops->regex_lock);
+
if (flag & FTRACE_ITER_NOTRACE)
hash = ops->notrace_hash;
else
hash = ops->filter_hash;
 
-   iter->ops = ops;
-   iter->flags = flag;
-
if (file->f_mode & FMODE_WRITE) {
-   mutex_lock(&ftrace_lock);
iter->hash = alloc_and_copy_ftrace_hash(FTRACE_HASH_DEFAULT_BITS, hash);
-   mutex_unlock(&ftrace_lock);
-
if (!iter->hash) {
trace_parser_put(&iter->parser);
kfree(iter);
-   return -ENOMEM;
+   ret = -ENOMEM;
+   goto out_unlock;
}
}
 
-   mutex_lock(&ops->regex_lock);
-
if ((file->f_mode & FMODE_WRITE) &&
(file->f_flags & O_TRUNC))
ftrace_filter_reset(iter->hash);
@@ -2685,6 +2683,8 @@ ftrace_regex_open(struct ftrace_ops *ops, int flag,
}
} else
file->private_data = iter;
+
+ out_unlock:
mutex_unlock(&ops->regex_lock);
 
return ret;
@@ -2999,7 +2999,7 @@ register_ftrace_function_probe(char *glob, struct ftrace_probe_ops *ops,
if (WARN_ON(not))
return -EINVAL;
 
-   mutex_lock(&ftrace_lock);
+   mutex_lock(&trace_probe_ops.regex_lock);
 
hash = alloc_and_copy_ftrace_hash(FTRACE_HASH_DEFAULT_BITS, *orig_hash);
if (!hash) {
@@ -3057,14 +3057,16 @@ register_ftrace_function_probe(char *glob, struct ftrace_probe_ops *ops,
 
} while_for_each_ftrace_rec();
 
+   mutex_lock(&ftrace_lock);
ret = ftrace_hash_move(&trace_probe_ops, 1, orig_hash, hash);
if (ret < 0)
count = ret;
 
__enable_ftrace_function_probe();
+   mutex_unlock(&ftrace_lock);
 
  out_unlock:
-   mutex_unlock(&ftrace_lock);
+   mutex_unlock(&trace_probe_ops.regex_lock);
free_ftrace_hash(hash);
 
return count;
@@ -3104,7 +3106,7 @@ __unregister_ftrace_function_probe(char *glob, struct ftrace_probe_ops *ops,
return;
}
 
-   mutex_lock(&ftrace_lock);
+   mutex_lock(&trace_probe_ops.regex_lock);
 
hash = alloc_and_copy_ftrace_hash(FTRACE_HASH_DEFAULT_BITS, *orig_hash);
if (!hash)
@@ -3142,6 +3144,7 @@ __unregister_ftrace_function_probe(char *glob, struct ftrace_probe_ops *ops,
list_add(&entry->free_list, &free_list);
}
}
+   mutex_lock(&ftrace_lock);
__disable_ftrace_function_probe();
/*
 * Remove after the disable is called. Otherwise, if the last
@@ -3153,9 +3156,10 @@ __unregister_ftrace_function_probe(char *glob, struct ftrace_probe_ops *ops,
list_del(&entry->free_list);
ftrace_free_entry(entry);
}
+   mutex_unlock(&ftrace_lock);

  out_unlock:
-   mutex_unlock(&ftrace_lock);
+   mutex_unlock(&trace_probe_ops.regex_lock);
free_ftrace_hash(hash);
 }
 
@@ -3271,11 +3275,10 @@ ftrace_regex_write(struct file *file, const char __user *ubuf,
} else
iter = file->private_data;
 
-   mutex_lock(&iter->ops->regex_lock);
-
-   ret = -ENODEV;
if (unlikely(ftrace_disabled))
-   goto out_unlock;
+   return -ENODEV;
+
+   /* iter->hash is a local copy, so we don't need regex_lock */
 
parser = &iter->parser;
read = trace_get_user(parser, ubuf, cnt, ppos);
@@ -3286,13 +3289,11 @@ ftrace_regex_write(struct file *file, const char __user *ubuf,
parser->idx, enable);
trace_parser_clear(parser);
if (ret)
-   goto out_unlock;
+   goto out;
}
 
ret = read;
-out_unlock:
mutex_unlock(&iter->ops->regex_lock);
-
+ out:
return ret;
 }
 
@@ -3344,16 +3345,19 @@ ftrace_set_hash(struct ftrace_ops *ops, unsigned char 
