On Tue, 2008-02-12 at 10:23 +0100, Mike Galbraith wrote:
> If you plunk a usleep(1) in prior to calling thread_func() does your
> testcase performance change radically? If so, I wonder if the real
> application has the same kind of dependency.
The answer is yes for 2.6.22, and no for 2.6.24, wh
On Mon, 2008-02-11 at 14:31 -0600, Olof Johansson wrote:
> On Mon, Feb 11, 2008 at 08:58:46PM +0100, Mike Galbraith wrote:
> > It shouldn't matter if you yield or not really, that should reduce the
> > number of non-work spin cycles wasted awaiting preemption as threads
> > execute in series (the
On Mon, 2008-02-11 at 16:45 -0500, Bill Davidsen wrote:
> I think the cost of moving to another CPU depends heavily on the CPU type.
> On a P4+HT the caches are shared, and moving costs almost nothing for
> cache hits, while on CPUs with other cache layouts the migration
> cost is higher.
Willy Tarreau wrote:
However, I fail to understand the goal of the reproducer. Granted it shows
irregularities in the scheduler under such conditions, but what *real*
workload would spend its time sequentially creating then immediately killing
threads, never using more than 2 at a time ?
If th
On Mon, Feb 11, 2008 at 09:15:55AM +0100, Mike Galbraith wrote:
> Piddling around with your testcase, it still looks to me like things
> improved considerably in latest greatest git. Hopefully that means
> happiness is in the pipe for the real workload... synthetic load is
> definitely happier here
On Sun, 2008-02-10 at 01:00 -0600, Olof Johansson wrote:
> On Sun, Feb 10, 2008 at 07:15:58AM +0100, Willy Tarreau wrote:
>
> > > I agree that the testcase is highly artificial. Unfortunately, it's
> > > not uncommon to see these kinds of weird testcases from customers trying
> > > to evaluate new
On Sun, Feb 10, 2008 at 07:15:58AM +0100, Willy Tarreau wrote:
> On Sat, Feb 09, 2008 at 11:29:41PM -0600, Olof Johansson wrote:
> > 40M:
> > 2.6.22 time 94315 ms
> > 2.6.23 time 107930 ms
> > 2.6.24 time 113291 ms
> > 2.6.24-git19 time 110360 ms
> >
>
On Sat, 2008-02-09 at 17:19 +0100, Willy Tarreau wrote:
> However, I fail to understand the goal of the reproducer.
(me too, I was trying to figure out what could be expected)
On Sat, 2008-02-09 at 09:03 +0100, Willy Tarreau wrote:
> How many CPUs do you have ?
It's a P4/HT, so 1 plus $CHUMP_CHANGE_MAYBE
> > 2.6.25-smp (git today)
> > time 29 ms
> > time 61 ms
> > time 72 ms
>
> These ones look rather strange. What type of workload is it ? Can you
> publish the prog
* Olof Johansson <[EMAIL PROTECTED]> wrote:
> 2.6.22: 3332 ms
> 2.6.23: 4397 ms
> 2.6.24: 8953 ms
> 2.6.24-git19: 8986 ms
if you enable SCHED_DEBUG, and subtract 4 from the value of
/proc/sys/kernel/sched_features, does it get any better?
if not, does writing 0 into /proc/sys/kernel/sched_feat
Hi,
I ended up with a customer benchmark in my lap this week that doesn't
do well on recent kernels. :(
After cutting it down to a simple testcase/microbenchmark, it seems like
recent kernels don't do as well with short-lived threads competing
with the thread it's cloned off of. The CFS scheduler