On Tue, Jan 17, 2017 at 08:12:20AM +0100, Peter Zijlstra wrote:
> On Tue, Jan 17, 2017 at 11:05:42AM +0900, Byungchul Park wrote:
> > On Mon, Jan 16, 2017 at 04:10:01PM +0100, Peter Zijlstra wrote:
> > > On Fri, Dec 09, 2016 at 02:12:03PM +0900, Byungchul Park wrote:
>
> > > > +
> > > > +
On Tue, Jan 17, 2017 at 08:14:56AM +0100, Peter Zijlstra wrote:
> On Tue, Jan 17, 2017 at 11:05:42AM +0900, Byungchul Park wrote:
> > On Mon, Jan 16, 2017 at 04:10:01PM +0100, Peter Zijlstra wrote:
> > > On Fri, Dec 09, 2016 at 02:12:03PM +0900, Byungchul Park wrote:
>
> > > > @@ -155,6 +164,9 @@
On Tue, Jan 17, 2017 at 02:24:08PM +0800, Boqun Feng wrote:
> On Tue, Jan 17, 2017 at 11:33:41AM +0900, Byungchul Park wrote:
> > On Mon, Jan 16, 2017 at 04:13:19PM +0100, Peter Zijlstra wrote:
> > > On Fri, Dec 09, 2016 at 02:12:03PM +0900, Byungchul Park wrote:
> > > > + /*
> > > > +
On Tue, Jan 17, 2017 at 11:05:42AM +0900, Byungchul Park wrote:
> On Mon, Jan 16, 2017 at 04:10:01PM +0100, Peter Zijlstra wrote:
> > On Fri, Dec 09, 2016 at 02:12:03PM +0900, Byungchul Park wrote:
> > > @@ -155,6 +164,9 @@ struct lockdep_map {
> > > int cpu;
> > >
On Tue, Jan 17, 2017 at 11:05:42AM +0900, Byungchul Park wrote:
> On Mon, Jan 16, 2017 at 04:10:01PM +0100, Peter Zijlstra wrote:
> > On Fri, Dec 09, 2016 at 02:12:03PM +0900, Byungchul Park wrote:
> > > +
> > > + /*
> > > + * Whenever irq happens, these are updated so that we can
> > > + *
On Tue, Jan 17, 2017 at 11:33:41AM +0900, Byungchul Park wrote:
> On Mon, Jan 16, 2017 at 04:13:19PM +0100, Peter Zijlstra wrote:
> > On Fri, Dec 09, 2016 at 02:12:03PM +0900, Byungchul Park wrote:
> > > + /*
> > > + * We assign class_idx here redundantly even though following
> > > + * memcpy
On Mon, Jan 16, 2017 at 04:13:19PM +0100, Peter Zijlstra wrote:
> On Fri, Dec 09, 2016 at 02:12:03PM +0900, Byungchul Park wrote:
> > + /*
> > + * We assign class_idx here redundantly even though following
> > + * memcpy will cover it, in order to ensure an rcu reader can
> > + * access
On Mon, Jan 16, 2017 at 04:10:01PM +0100, Peter Zijlstra wrote:
> On Fri, Dec 09, 2016 at 02:12:03PM +0900, Byungchul Park wrote:
>
> > @@ -143,6 +149,9 @@ struct lock_class_stats lock_stats(struct lock_class *class);
> > void clear_lock_stats(struct lock_class *class);
> > #endif
> >
>
On Fri, Dec 09, 2016 at 02:12:03PM +0900, Byungchul Park wrote:
> + /*
> + * We assign class_idx here redundantly even though following
> + * memcpy will cover it, in order to ensure an rcu reader can
> + * access the class_idx atomically without lock.
> + *
> + * Here
On Fri, Dec 09, 2016 at 02:12:03PM +0900, Byungchul Park wrote:
> @@ -143,6 +149,9 @@ struct lock_class_stats lock_stats(struct lock_class *class);
> void clear_lock_stats(struct lock_class *class);
> #endif
>
> +#ifdef CONFIG_LOCKDEP_CROSSRELEASE
> +struct cross_lock;
> +#endif
That
On Fri, Jan 13, 2017 at 12:39:04PM +0800, Lai Jiangshan wrote:
> > +
> > +/*
> > + * No contention. Only disabling irqs is required.
> > + */
> > +static int same_context_plock(struct pend_lock *plock)
> > +{
> > + struct task_struct *curr = current;
> > + int cpu = smp_processor_id();
>
> +
> +/*
> + * No contention. Only disabling irqs is required.
> + */
> +static int same_context_plock(struct pend_lock *plock)
> +{
> + struct task_struct *curr = current;
> + int cpu = smp_processor_id();
> +
> + /* In the case of hardirq context */
> + if
The crossrelease feature calls a lock a 'crosslock' if it can be
released in any context. For a crosslock, every lock held in the
crosslock's release context, up until the crosslock is eventually
released, has a dependency with the crosslock.
Using the crossrelease feature, we can detect