On Fri, Jan 23, 2015 at 03:45:55PM -0800, Jason Low wrote:
> On a side note, if we just move the cputimer->running = 1 to after the
> call to update_gt_cputime in thread_group_cputimer(), then we don't have
> to worry about concurrent adds occurring in this function?
Yeah, maybe.. There are a few
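
The reordering Jason suggests would look roughly like this. A sketch only, assuming the atomic64 fields and helpers discussed in this thread (thread_group_cputime() is the kernel's existing slow-path summation helper), not the exact patch:

/*
 * Sketch: thread_group_cputimer() with the spinlock gone and the store
 * to ->running moved after update_gt_cputime().  The accounting paths
 * only add into the atomic64 fields while ->running is set, so nothing
 * races with the initial sync below.
 */
void thread_group_cputimer(struct task_struct *tsk, struct task_cputime *times)
{
	struct thread_group_cputimer *cputimer = &tsk->signal->cputimer;
	struct task_cputime sum;

	if (!atomic_read(&cputimer->running)) {
		/* Sync the atomic counters to the current totals first... */
		thread_group_cputime(tsk, &sum);
		update_gt_cputime(cputimer, &sum);
		/* ...and only then advertise the timer as running. */
		atomic_set(&cputimer->running, 1);
	}

	times->utime = atomic64_read(&cputimer->utime);
	times->stime = atomic64_read(&cputimer->stime);
	times->sum_exec_runtime = atomic64_read(&cputimer->sum_exec_runtime);
}
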
On Fri, 2015-01-23 at 21:08 +0100, Peter Zijlstra wrote:
> On Fri, Jan 23, 2015 at 11:23:36AM -0800, Jason Low wrote:
> > On Fri, 2015-01-23 at 10:25 +0100, Peter Zijlstra wrote:
> > > On Thu, Jan 22, 2015 at 07:31:53PM -0800, Jason Low wrote:
> > > > +static void update_gt_cputime(struct thread_group_cputimer *a, struct task_cputime *b)
On Fri, Jan 23, 2015 at 10:07:31AM -0800, Jason Low wrote:
> On Fri, 2015-01-23 at 10:33 +0100, Peter Zijlstra wrote:
> > > + .running = ATOMIC_INIT(0), \
> > > + atomic_t running;
> > > + atomic_set(&sig->cputimer.running, 1);
> > > @@ -174,7 +174,7 @@ static inline bool cputimer_running(struct task_struct *tsk)
On Fri, Jan 23, 2015 at 11:23:36AM -0800, Jason Low wrote:
> On Fri, 2015-01-23 at 10:25 +0100, Peter Zijlstra wrote:
> > On Thu, Jan 22, 2015 at 07:31:53PM -0800, Jason Low wrote:
> > > +static void update_gt_cputime(struct thread_group_cputimer *a, struct
> > > task_cputime *b)
> > > {
> > > +
On Fri, 2015-01-23 at 10:25 +0100, Peter Zijlstra wrote:
> On Thu, Jan 22, 2015 at 07:31:53PM -0800, Jason Low wrote:
> > +static void update_gt_cputime(struct thread_group_cputimer *a, struct
> > task_cputime *b)
> > {
> > + if (b->utime > atomic64_read(&a->utime))
> > +
On Fri, 2015-01-23 at 10:33 +0100, Peter Zijlstra wrote:
> > + .running = ATOMIC_INIT(0), \
> > + atomic_t running;
> > + atomic_set(&sig->cputimer.running, 1);
> > @@ -174,7 +174,7 @@ static inline bool cputimer_running(struct task_struct
> > *tsk)
>
> + .running = ATOMIC_INIT(0), \
> + atomic_t running;
> + atomic_set(&sig->cputimer.running, 1);
> @@ -174,7 +174,7 @@ static inline bool cputimer_running(struct task_struct
> *tsk)
> + if (!atomic_read(&cputimer->running))
> + if
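
Piecing the quoted fragments together, the conversion appears to have this shape. The field layout and the rest of cputimer_running() are abridged assumptions, not the verbatim patch:

/* ->running becomes an atomic_t so the fast path can test it without
 * taking a lock; the accumulated times become atomic64_t. */
struct thread_group_cputimer {
	atomic64_t	utime;
	atomic64_t	stime;
	atomic64_t	sum_exec_runtime;
	atomic_t	running;
};

/* The @@ -174,7 +174,7 @@ hunk: a lockless test of the flag so group
 * accounting bails out early when no group timer is armed (the
 * function's remaining checks are unchanged and elided here). */
static inline bool cputimer_running(struct task_struct *tsk)
{
	struct thread_group_cputimer *cputimer = &tsk->signal->cputimer;

	if (!atomic_read(&cputimer->running))
		return false;

	return true;
}
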
On Thu, Jan 22, 2015 at 07:31:53PM -0800, Jason Low wrote:
> +static void update_gt_cputime(struct thread_group_cputimer *a, struct
> task_cputime *b)
> {
> + if (b->utime > atomic64_read(&a->utime))
> + atomic64_set(&a->utime, b->utime);
>
> + if (b->stime > atomic64_read(&a->stime))
> +
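
Reassembling the excerpts, the helper reads roughly as follows. Only the utime and stime comparisons are visible above; the sum_exec_runtime case is assumed by symmetry:

static void update_gt_cputime(struct thread_group_cputimer *a,
			      struct task_cputime *b)
{
	/* Ratchet each accumulated time forward, never backward. */
	if (b->utime > atomic64_read(&a->utime))
		atomic64_set(&a->utime, b->utime);

	if (b->stime > atomic64_read(&a->stime))
		atomic64_set(&a->stime, b->stime);

	if (b->sum_exec_runtime > atomic64_read(&a->sum_exec_runtime))
		atomic64_set(&a->sum_exec_runtime, b->sum_exec_runtime);
}

Note that each read/compare/set pair is not atomic as a unit, which is what makes the ordering against the ->running store (Jason's side note at the top) interesting: while ->running is still 0, the accounting paths leave these fields alone.
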
On Thu, Jan 22, 2015 at 07:31:53PM -0800, Jason Low wrote:
> When running a database workload, we found a scalability issue
> with itimers.
>
> Much of the problem was caused by the thread_group_cputimer spinlock.
> Each time we account for group system/user time, we need to obtain a
> thread_group_cputimer's spinlock to update the timers. On larger
> systems (such as
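
The hot path the changelog describes is the per-tick group accounting. As a before/after illustration (a sketch modeled on the 3.19-era account_group_user_time(); the stime and sum_exec_runtime variants change the same way):

/* Before: every accounted tick takes the shared per-process lock,
 * which all CPUs running the workload contend on. */
static inline void account_group_user_time(struct task_struct *tsk,
					   cputime_t cputime)
{
	struct thread_group_cputimer *cputimer = &tsk->signal->cputimer;

	if (!cputimer_running(tsk))
		return;

	raw_spin_lock(&cputimer->lock);
	cputimer->cputime.utime += cputime;
	raw_spin_unlock(&cputimer->lock);
}

/* After: a lockless atomic add, so concurrent ticks on different CPUs
 * no longer serialize. */
static inline void account_group_user_time(struct task_struct *tsk,
					   cputime_t cputime)
{
	struct thread_group_cputimer *cputimer = &tsk->signal->cputimer;

	if (!cputimer_running(tsk))
		return;

	atomic64_add(cputime, &cputimer->utime);
}
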