Re: #define HZ 1024 -- negative effects?

2001-04-29 Thread Michael Rothwell

Great. I'm running 4.02. How do I enable "silken mouse"?

Thanks,

-Michael

On 29 Apr 2001 14:44:11 -0700, Jim Gettys wrote:
> The biggest single issue in GUI responsiveness on Linux has been caused
> by XFree86's implementation of mouse tracking in user space.
> 
> On typical UNIX systems, the mouse was often controlled in the kernel
> driver.  Until recently (XFree86 4.0 days), the XFree86 server's reads
> of mouse/keyboard events were not signal driven, so that if the X server
> was loaded, the cursor stopped moving.
> 
> On most (but not all) current XFree86 implementations, this is now
> signal driven, and further the internal X scheduler has been reworked to
> make it difficult for a single client to monopolize the X server.
> 
> So the first thing you should try is to make sure you are using an X server
> with this "silken mouse" enabled; in other words, run XFree86 4.0x and make
> sure the implementation has it enabled.
> 
> There may be more to do in Linux thereafter, but until you've done this, you
> don't get to discuss the matter further.
>   - Jim Gettys
> 
> --
> Jim Gettys
> Technology and Corporate Development
> Compaq Computer Corporation
> [EMAIL PROTECTED]
> 

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: #define HZ 1024 -- negative effects?

2001-04-29 Thread Jim Gettys

The biggest single issue in GUI responsiveness on Linux has been caused
by XFree86's implementation of mouse tracking in user space.

On typical UNIX systems, the mouse was often controlled in the kernel
driver.  Until recently (XFree86 4.0 days), the XFree86 server's reads
of mouse/keyboard events were not signal driven, so that if the X server
was loaded, the cursor stopped moving.

On most (but not all) current XFree86 implementations, this is now
signal driven, and further the internal X scheduler has been reworked to
make it difficult for a single client to monopolize the X server.

So the first thing you should try is to make sure you are using an X server
with this "silken mouse" enabled; in other words, run XFree86 4.0x and make
sure the implementation has it enabled.

There may be more to do in Linux thereafter, but until you've done this, you
don't get to discuss the matter further.
- Jim Gettys

--
Jim Gettys
Technology and Corporate Development
Compaq Computer Corporation
[EMAIL PROTECTED]

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: #define HZ 1024 -- negative effects?

2001-04-29 Thread george anzinger

Mike Galbraith wrote:
> 
> On Fri, 27 Apr 2001, Nigel Gamble wrote:
> 
> > On Fri, 27 Apr 2001, Mike Galbraith wrote:
> > > On Fri, 27 Apr 2001, Nigel Gamble wrote:
> > > > > What about SCHED_YIELD and allocating during vm stress times?
> > >
> > > snip
> > >
> > > > A well-written GUI should not be using SCHED_YIELD.  If it is
> > >
> > > I was referring to the gui (or other tasks) allocating memory during
> > > vm stress periods, and running into the yield in __alloc_pages()..
> > > not a voluntary yield.
> >
> > Oh, I see.  Well, if this were causing the problem, then running the GUI
> > at a real-time priority would be a better solution than increasing the
> > clock frequency, since SCHED_YIELD has no effect on real-time tasks
> > unless there are other runnable real-time tasks at the same priority.
> > The call to schedule() would just reschedule the real-time GUI task
> > itself immediately.
> >
> > However, in times of vm stress it is more likely that GUI performance
> > problems would be caused by parts of the GUI having been paged out,
> > rather than by anything which could be helped by scheduling differences.
> 
> Agreed.  I wasn't thinking about swapping, only kswapd not quite keeping
> up with laundering, and then user tasks having to pick up some of the
> load.  Anyway, I've been told that for most values of HZ the slice is
> 50ms, so my reasoning wrt HZ/SCHED_YIELD was wrong.  (begs the question
> why do some archs use higher HZ values?)
> 
Well, almost.  Here is the scaling code:

#if HZ < 200
#define TICK_SCALE(x)   ((x) >> 2)
#elif HZ < 400
#define TICK_SCALE(x)   ((x) >> 1)
#elif HZ < 800
#define TICK_SCALE(x)   (x)
#elif HZ < 1600
#define TICK_SCALE(x)   ((x) << 1)
#else
#define TICK_SCALE(x)   ((x) << 2)
#endif

#define NICE_TO_TICKS(nice) (TICK_SCALE(20-(nice))+1)

This, by the way, is new with 2.4.x.  As to why, it has more to do with
timer resolution than anything else.  Timer resolution is 1/HZ, so higher
HZ => better resolution.  Of course, you must pay for it.  Nothing is
free :)  Higher HZ means more interrupts => higher overhead.

George




Re: #define HZ 1024 -- negative effects?

2001-04-28 Thread Guus Sliepen

On Wed, Apr 25, 2001 at 10:02:26PM -0400, Dan Maas wrote:

> > Are there any negative effects of editing include/asm/param.h to change
> > HZ from 100 to 1024? Or any other number? This has been suggested as a
> > way to improve the responsiveness of the GUI on a Linux system.
[...]
> Of course, the appearance of better interactivity could just be a placebo
> effect. Double-blind trials, anyone? =)

I tried HZ=1024 on my i386 kernel, to check two things. One was a timer
routine whose performance depends heavily on the granularity of the
nanosleep() or select() system call. Since those calls always block for at
least 1/HZ seconds, the timer precision indeed increased by a factor of
roughly 10 when I changed the HZ value from 100 to 1024.

However, another thing I wanted to do was to generate profiling statistics for
freesci. Profiling is done with 1/HZ granularity. Any subroutine that
executes in less than 1/HZ seconds cannot be profiled accurately (for example,
a routine that executes in 1 nanosecond and one that needs 1/HZ/2 seconds both
show up as taking 1 sample).

Now, you would think that profiling would be a lot better with HZ=1024.
However, the program didn't even run anymore! The reason is that some system
calls are being interrupted by SIGPROF every 1/HZ seconds, and return
something like ERESTARTSYS to the libraries. The libraries then try to
restart the system call, but another SIGPROF is bound to follow shortly,
again interrupting the system call, and so on...

---
Met vriendelijke groet / with kind regards,
  Guus Sliepen <[EMAIL PROTECTED]>
---
See also: http://tinc.nl.linux.org/
  http://www.kernelbench.org/
---



Re: #define HZ 1024 -- negative effects?

2001-04-27 Thread Mike Galbraith

On Fri, 27 Apr 2001, Nigel Gamble wrote:

> On Fri, 27 Apr 2001, Mike Galbraith wrote:
> > On Fri, 27 Apr 2001, Nigel Gamble wrote:
> > > > What about SCHED_YIELD and allocating during vm stress times?
> >
> > snip
> >
> > > A well-written GUI should not be using SCHED_YIELD.  If it is
> >
> > I was referring to the gui (or other tasks) allocating memory during
> > vm stress periods, and running into the yield in __alloc_pages()..
> > not a voluntary yield.
>
> Oh, I see.  Well, if this were causing the problem, then running the GUI
> at a real-time priority would be a better solution than increasing the
> clock frequency, since SCHED_YIELD has no effect on real-time tasks
> unless there are other runnable real-time tasks at the same priority.
> The call to schedule() would just reschedule the real-time GUI task
> itself immediately.
>
> However, in times of vm stress it is more likely that GUI performance
> problems would be caused by parts of the GUI having been paged out,
> rather than by anything which could be helped by scheduling differences.

Agreed.  I wasn't thinking about swapping, only kswapd not quite keeping
up with laundering, and then user tasks having to pick up some of the
load.  Anyway, I've been told that for most values of HZ the slice is
50ms, so my reasoning wrt HZ/SCHED_YIELD was wrong.  (begs the question
why do some archs use higher HZ values?)

-Mike




Re: #define HZ 1024 -- negative effects?

2001-04-27 Thread Nigel Gamble

On Fri, 27 Apr 2001, Mike Galbraith wrote:
> On Fri, 27 Apr 2001, Nigel Gamble wrote:
> > > What about SCHED_YIELD and allocating during vm stress times?
> 
> snip
> 
> > A well-written GUI should not be using SCHED_YIELD.  If it is
> 
> I was referring to the gui (or other tasks) allocating memory during
> vm stress periods, and running into the yield in __alloc_pages()..
> not a voluntary yield.

Oh, I see.  Well, if this were causing the problem, then running the GUI
at a real-time priority would be a better solution than increasing the
clock frequency, since SCHED_YIELD has no effect on real-time tasks
unless there are other runnable real-time tasks at the same priority.
The call to schedule() would just reschedule the real-time GUI task
itself immediately.

However, in times of vm stress it is more likely that GUI performance
problems would be caused by parts of the GUI having been paged out,
rather than by anything which could be helped by scheduling differences.

Nigel Gamble        [EMAIL PROTECTED]
Mountain View, CA, USA. http://www.nrg.org/

MontaVista Software [EMAIL PROTECTED]




Re: #define HZ 1024 -- negative effects?

2001-04-27 Thread Mike Galbraith

On Fri, 27 Apr 2001, Nigel Gamble wrote:

> > What about SCHED_YIELD and allocating during vm stress times?

snip

> A well-written GUI should not be using SCHED_YIELD.  If it is

I was referring to the gui (or other tasks) allocating memory during
vm stress periods, and running into the yield in __alloc_pages()..
not a voluntary yield.

-Mike




Re: #define HZ 1024 -- negative effects?

2001-04-27 Thread Nigel Gamble

On Fri, 27 Apr 2001, Mike Galbraith wrote:
> > Rubbish.  Whenever a higher-priority thread than the current
> > thread becomes runnable the current thread will get preempted,
> > regardless of whether its timeslices is over or not.
> 
> What about SCHED_YIELD and allocating during vm stress times?
> 
> Say you have only two tasks.  One is the gui and is allocating,
> the other is a pure compute task.  The compute task doesn't do
> anything which will cause preemption except use up its slice.
> The gui may yield the cpu but the compute job never will.
> 
> (The gui won't _become_ runnable if that matters.  It's marked
> as running, has yielded its remaining slice and went to sleep..
> with its eyes open;)

A well-written GUI should not be using SCHED_YIELD.  If it is
"allocating" anything, it won't be using SCHED_YIELD or be marked
runnable, it will be blocked, waiting until the resource becomes
available.  When that happens, it will preempt the compute task (if its
priority is high enough, which is very likely - and can be assured if
it's running at a real-time priority as I suggested earlier).

Nigel Gamble        [EMAIL PROTECTED]
Mountain View, CA, USA. http://www.nrg.org/

MontaVista Software [EMAIL PROTECTED]




Re: #define HZ 1024 -- negative effects?

2001-04-27 Thread Dan Mann

When you change the #define HZ setting in param.h, what effect does that
have on CLOCKS_PER_SEC?  Are you really going to get a different amount
of slice time, or is there another kernel source file (timex.h) that
just puts you back anyway?


Dan
- Original Message -
From: "Mike Galbraith" <[EMAIL PROTECTED]>
To: "linux-kernel" <[EMAIL PROTECTED]>
Sent: Friday, April 27, 2001 6:04 AM
Subject: Re: #define HZ 1024 -- negative effects?


> > > I have not tried it, but I would think that setting HZ to 1024
> > > should make a big improvement in responsiveness.
> > >
> > > Currently, the time slice allocated to a standard Linux
> > > process is 5*HZ, or 50ms when HZ is 100.  That means that you
> > > will notice keystrokes being echoed slowly in X when you have
> > > just one or two running processes,
> >
> > Rubbish.  Whenever a higher-priority thread than the current
> > thread becomes runnable the current thread will get preempted,
> > regardless of whether its timeslices is over or not.
>
> (hmm.. no one mentioned this, and it doesn't look like anyone is
> going to volunteer to be my proxy [see ionut's .sig].  oh well)
>
> What about SCHED_YIELD and allocating during vm stress times?
>
> Say you have only two tasks.  One is the gui and is allocating,
> the other is a pure compute task.  The compute task doesn't do
> anything which will cause preemption except use up its slice.
> The gui may yield the cpu but the compute job never will.
>
> (The gui won't _become_ runnable if that matters.  It's marked
> as running, has yielded its remaining slice and went to sleep..
> with its eyes open;)
>
> Since increasing HZ reduces timeslice, the maximum amount of time
> that you can yield is also decreased.  In the above case, isn't
> it true that changing HZ from 100 to 1000 decreases sleep time
> for the yielder from 50ms to 5ms if the compute task is at the
> start of its slice when the gui yields?
>
> It seems likely that even if you're running a normal mix of tasks,
> that the gui, big fat oinker that the things tend to be, will yield
> much more often than the slimmer tasks it's competing with for cpu
> because it's likely allocating/yielding much more often.
>
> It follows that increasing HZ must decrease latency for the gui if
> there's any vm stress.. and that's the time that gui responsiveness
> complaints usually refer to.  Throughput for yielding tasks should
> also increase with a larger HZ value because the number of yields
> is constant (tied to the number of allocations) but the amount of
> cpu time lost per yield is smaller.
>
> Correct?
>
> (if big fat tasks _don't_ generally allocate more than slim tasks,
> my referring to ionut's .sig was most unfortunate.  i hope it's safe
> to assume that you can't become that obese without eating a lot;)
>
>   -Mike
>



Re: #define HZ 1024 -- negative effects?

2001-04-27 Thread Mike Galbraith

> > I have not tried it, but I would think that setting HZ to 1024
> > should make a big improvement in responsiveness.
> >
> > Currently, the time slice allocated to a standard Linux
> > process is 5*HZ, or 50ms when HZ is 100.  That means that you
> > will notice keystrokes being echoed slowly in X when you have
> > just one or two running processes,
>
> Rubbish.  Whenever a higher-priority thread than the current
> thread becomes runnable the current thread will get preempted,
> regardless of whether its timeslices is over or not.

(hmm.. no one mentioned this, and it doesn't look like anyone is
going to volunteer to be my proxy [see ionut's .sig].  oh well)

What about SCHED_YIELD and allocating during vm stress times?

Say you have only two tasks.  One is the gui and is allocating,
the other is a pure compute task.  The compute task doesn't do
anything which will cause preemption except use up its slice.
The gui may yield the cpu but the compute job never will.

(The gui won't _become_ runnable if that matters.  It's marked
as running, has yielded its remaining slice and went to sleep..
with its eyes open;)
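The "sleeping with its eyes open" yield Mike means lives in the 2.4 allocator path; paraphrased from memory (not verbatim kernel source), the pattern is roughly:

```c
/* Paraphrase of the involuntary yield in 2.4's __alloc_pages() path
 * (not verbatim source): the allocating task stays runnable but gives
 * up the remainder of its timeslice. */
current->policy |= SCHED_YIELD;      /* yield the rest of this slice    */
__set_current_state(TASK_RUNNING);   /* remain runnable, not blocked    */
schedule();                          /* let another runnable task run   */
```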

Since increasing HZ reduces timeslice, the maximum amount of time
that you can yield is also decreased.  In the above case, isn't
it true that changing HZ from 100 to 1000 decreases sleep time
for the yielder from 50ms to 5ms if the compute task is at the
start of its slice when the gui yields?

It seems likely that even if you're running a normal mix of tasks,
that the gui, big fat oinker that the things tend to be, will yield
much more often than the slimmer tasks it's competing with for cpu
because it's likely allocating/yielding much more often.

It follows that increasing HZ must decrease latency for the gui if
there's any vm stress.. and that's the time that gui responsiveness
complaints usually refer to.  Throughput for yielding tasks should
also increase with a larger HZ value because the number of yields
is constant (tied to the number of allocations) but the amount of
cpu time lost per yield is smaller.

Correct?

(if big fat tasks _don't_ generally allocate more than slim tasks,
my referring to ionut's .sig was most unfortunate.  i hope it's safe
to assume that you can't become that obese without eating a lot;)

-Mike




Re: #define HZ 1024 -- negative effects?

2001-04-27 Thread Mike Galbraith

  I have not tried it, but I would think that setting HZ to 1024
  should make a big improvement in responsiveness.
 
  Currently, the time slice allocated to a standard Linux
  process is 5*HZ, or 50ms when HZ is 100.  That means that you
  will notice keystrokes being echoed slowly in X when you have
  just one or two running processes,

 Rubbish.  Whenever a higher-priority thread than the current
 thread becomes runnable the current thread will get preempted,
 regardless of whether its timeslices is over or not.

(hmm.. noone mentioned this, and it doesn't look like anyone is
going to volunteer to be my proxy [see ionut's .sig].  oh well)

What about SCHED_YIELD and allocating during vm stress times?

Say you have only two tasks.  One is the gui and is allocating,
the other is a pure compute task.  The compute task doesn't do
anything which will cause preemtion except use up it's slice.
The gui may yield the cpu but the compute job never will.

(The gui won't _become_ runnable if that matters.  It's marked
as running, has yielded it's remaining slice and went to sleep..
with it's eyes open;)

Since increasing HZ reduces timeslice, the maximum amount of time
that you can yield is also decreased.  In the above case, isn't
it true that changing HZ from 100 to 1000 decreases sleep time
for the yielder from 50ms to 5ms if the compute task is at the
start of it's slice when the gui yields?

It seems likely that even if you're running a normal mix of tasks,
that the gui, big fat oinker that the things tend to be, will yield
much more often than the slimmer tasks it's competing with for cpu
because it's likely allocating/yielding much more often.

It follows that increasing HZ must decrease latency for the gui if
there's any vm stress.. and that's the time that gui responsivness
complaints usually refer to.  Throughput for yielding tasks should
also increase with a larger HZ value because the number of yields
is constant (tied to the number of allocations) but the amount of
cpu time lost per yield is smaller.

Correct?

(if big fat tasks _don't_ generally allocate more than slim tasks,
my refering to ionuts .sig was most unfortunate.  i hope it's safe
to assume that you can't become that obese without eating a lot;)

-Mike

-
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: #define HZ 1024 -- negative effects?

2001-04-27 Thread Dan Mann

When you change the #define HZ setting in param.h, what effect does that
have on the CLOCKS_PER_SEC?  Are you really going to get a different amount
of slice time or is the is there another kernel source file (timex.h) that
just puts you back anyway?


Dan
- Original Message -
From: Mike Galbraith [EMAIL PROTECTED]
To: linux-kernel [EMAIL PROTECTED]
Sent: Friday, April 27, 2001 6:04 AM
Subject: Re: #define HZ 1024 -- negative effects?


   I have not tried it, but I would think that setting HZ to 1024
   should make a big improvement in responsiveness.
  
   Currently, the time slice allocated to a standard Linux
   process is 5*HZ, or 50ms when HZ is 100.  That means that you
   will notice keystrokes being echoed slowly in X when you have
   just one or two running processes,
 
  Rubbish.  Whenever a higher-priority thread than the current
  thread becomes runnable the current thread will get preempted,
  regardless of whether its timeslices is over or not.

 (hmm.. noone mentioned this, and it doesn't look like anyone is
 going to volunteer to be my proxy [see ionut's .sig].  oh well)

 What about SCHED_YIELD and allocating during vm stress times?

 Say you have only two tasks.  One is the gui and is allocating,
 the other is a pure compute task.  The compute task doesn't do
 anything which will cause preemtion except use up it's slice.
 The gui may yield the cpu but the compute job never will.

 (The gui won't _become_ runnable if that matters.  It's marked
 as running, has yielded it's remaining slice and went to sleep..
 with it's eyes open;)

 Since increasing HZ reduces timeslice, the maximum amount of time
 that you can yield is also decreased.  In the above case, isn't
 it true that changing HZ from 100 to 1000 decreases sleep time
 for the yielder from 50ms to 5ms if the compute task is at the
 start of it's slice when the gui yields?

 It seems likely that even if you're running a normal mix of tasks,
 that the gui, big fat oinker that the things tend to be, will yield
 much more often than the slimmer tasks it's competing with for cpu
 because it's likely allocating/yielding much more often.

 It follows that increasing HZ must decrease latency for the gui if
 there's any vm stress.. and that's the time that gui responsivness
 complaints usually refer to.  Throughput for yielding tasks should
 also increase with a larger HZ value because the number of yields
 is constant (tied to the number of allocations) but the amount of
 cpu time lost per yield is smaller.

 Correct?

 (if big fat tasks _don't_ generally allocate more than slim tasks,
 my refering to ionuts .sig was most unfortunate.  i hope it's safe
 to assume that you can't become that obese without eating a lot;)

   -Mike

 -
 To unsubscribe from this list: send the line unsubscribe linux-kernel in
 the body of a message to [EMAIL PROTECTED]
 More majordomo info at  http://vger.kernel.org/majordomo-info.html
 Please read the FAQ at  http://www.tux.org/lkml/

-
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: #define HZ 1024 -- negative effects?

2001-04-27 Thread Nigel Gamble

On Fri, 27 Apr 2001, Mike Galbraith wrote:
  Rubbish.  Whenever a higher-priority thread than the current
  thread becomes runnable the current thread will get preempted,
  regardless of whether its timeslices is over or not.
 
 What about SCHED_YIELD and allocating during vm stress times?
 
 Say you have only two tasks.  One is the gui and is allocating,
 the other is a pure compute task.  The compute task doesn't do
 anything which will cause preemtion except use up it's slice.
 The gui may yield the cpu but the compute job never will.
 
 (The gui won't _become_ runnable if that matters.  It's marked
 as running, has yielded it's remaining slice and went to sleep..
 with it's eyes open;)

A well-written GUI should not be using SCHED_YIELD.  If it is
allocating, anything, it won't be using SCHED_YIELD or be marked
runnable, it will be blocked, waiting until the resource becomes
available.  When that happens, it will preempt the compute task (if its
priority is high enough, which is very likely - and can be assured if
it's running at a real-time priority as I suggested earlier).

Nigel Gamble[EMAIL PROTECTED]
Mountain View, CA, USA. http://www.nrg.org/

MontaVista Software [EMAIL PROTECTED]

-
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: #define HZ 1024 -- negative effects?

2001-04-27 Thread Mike Galbraith

On Fri, 27 Apr 2001, Nigel Gamble wrote:

  What about SCHED_YIELD and allocating during vm stress times?

snip

 A well-written GUI should not be using SCHED_YIELD.  If it is

I was refering to the gui (or other tasks) allocating memory during
vm stress periods, and running into the yield in __alloc_pages()..
not a voluntary yield.

-Mike

-
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: #define HZ 1024 -- negative effects?

2001-04-27 Thread Nigel Gamble

On Fri, 27 Apr 2001, Mike Galbraith wrote:
 On Fri, 27 Apr 2001, Nigel Gamble wrote:
   What about SCHED_YIELD and allocating during vm stress times?
 
 snip
 
  A well-written GUI should not be using SCHED_YIELD.  If it is
 
 I was referring to the gui (or other tasks) allocating memory during
 vm stress periods, and running into the yield in __alloc_pages()..
 not a voluntary yield.

Oh, I see.  Well, if this were causing the problem, then running the GUI
at a real-time priority would be a better solution than increasing the
clock frequency, since SCHED_YIELD has no effect on real-time tasks
unless there are other runnable real-time tasks at the same priority.
The call to schedule() would just reschedule the real-time GUI task
itself immediately.

However, in times of vm stress it is more likely that GUI performance
problems would be caused by parts of the GUI having been paged out,
rather than by anything which could be helped by scheduling differences.

Nigel Gamble[EMAIL PROTECTED]
Mountain View, CA, USA. http://www.nrg.org/

MontaVista Software [EMAIL PROTECTED]




Re: #define HZ 1024 -- negative effects?

2001-04-27 Thread Mike Galbraith

On Fri, 27 Apr 2001, Nigel Gamble wrote:

 On Fri, 27 Apr 2001, Mike Galbraith wrote:
  On Fri, 27 Apr 2001, Nigel Gamble wrote:
What about SCHED_YIELD and allocating during vm stress times?
 
  snip
 
   A well-written GUI should not be using SCHED_YIELD.  If it is
 
  I was referring to the gui (or other tasks) allocating memory during
  vm stress periods, and running into the yield in __alloc_pages()..
  not a voluntary yield.

 Oh, I see.  Well, if this were causing the problem, then running the GUI
 at a real-time priority would be a better solution than increasing the
 clock frequency, since SCHED_YIELD has no effect on real-time tasks
 unless there are other runnable real-time tasks at the same priority.
 The call to schedule() would just reschedule the real-time GUI task
 itself immediately.

 However, in times of vm stress it is more likely that GUI performance
 problems would be caused by parts of the GUI having been paged out,
 rather than by anything which could be helped by scheduling differences.

Agreed.  I wasn't thinking about swapping, only kswapd not quite keeping
up with laundering, and then user tasks having to pick up some of the
load.  Anyway, I've been told that for most values of HZ the slice is
50ms, so my reasoning wrt HZ/SCHED_YIELD was wrong.  (Which raises the
question: why do some archs use higher HZ values?)

-Mike




Re: #define HZ 1024 -- negative effects?

2001-04-26 Thread Dan Mann

So, the kernel really doesn't have much of an effect on the interactivity of
the gui?  I really don't think there is a problem right now at the
console..but I am curious to help it at the gui level.  Does it have
anything to do with the way the mouse is handled? I've applied the mvista
preemptive + low latency patch, and my subjective experience is that it
"feels" the same.  I'd just like to help and I'll patch the hell out of my
kernel if you need someone to test it.  I don't really care if my hard drive
catches on fire as long as it doesn't burn my house down :-)

Dan

- Original Message -
From: "Rik van Riel" <[EMAIL PROTECTED]>
To: "Adam J. Richter" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Thursday, April 26, 2001 2:31 PM
Subject: Re: #define HZ 1024 -- negative effects?


> On Thu, 26 Apr 2001, Adam J. Richter wrote:
>
> > I have not tried it, but I would think that setting HZ to 1024
> > should make a big improvement in responsiveness.
> >
> > Currently, the time slice allocated to a standard Linux
> > process is 5*HZ, or 50ms when HZ is 100.  That means that you
> > will notice keystrokes being echoed slowly in X when you have
> > just one or two running processes,
>
> Rubbish.  Whenever a higher-priority thread than the current
> thread becomes runnable the current thread will get preempted,
> regardless of whether its timeslice is over or not.
>
> And please, DO try things before proposing a radical change
> to the kernel ;)
>
> regards,
>
> Rik
> --
> Linux MM bugzilla: http://linux-mm.org/bugzilla.shtml
>
> Virtual memory is like a game you can't win;
> However, without VM there's truly nothing to lose...
>
> http://www.surriel.com/
> http://www.conectiva.com/ http://distro.conectiva.com/
>



Re: #define HZ 1024 -- negative effects?

2001-04-26 Thread Rik van Riel

On Thu, 26 Apr 2001, Adam J. Richter wrote:

>   I have not tried it, but I would think that setting HZ to 1024
> should make a big improvement in responsiveness.
>
>   Currently, the time slice allocated to a standard Linux
> process is 5*HZ, or 50ms when HZ is 100.  That means that you
> will notice keystrokes being echoed slowly in X when you have
> just one or two running processes,

Rubbish.  Whenever a higher-priority thread than the current
thread becomes runnable the current thread will get preempted,
regardless of whether its timeslice is over or not.

And please, DO try things before proposing a radical change
to the kernel ;)

regards,

Rik
--
Linux MM bugzilla: http://linux-mm.org/bugzilla.shtml

Virtual memory is like a game you can't win;
However, without VM there's truly nothing to lose...

http://www.surriel.com/
http://www.conectiva.com/   http://distro.conectiva.com/




Re: #define HZ 1024 -- negative effects?

2001-04-26 Thread Adam J. Richter

I have not tried it, but I would think that setting HZ to 1024
should make a big improvement in responsiveness.

Currently, the time slice allocated to a standard Linux process
is 5 ticks (5/HZ seconds), or 50ms when HZ is 100.  That means that you
will notice keystrokes being echoed slowly in X when you have just one
or two running processes, no matter how fast your CPU is, assuming these
processes do not complete in that time.  Setting HZ to 1000 should
improve that a lot, and the cost of the extra context switches should
still be quite small in comparison to time slice length (a 1ms time
slice = 1 million cycles on a 1GHz processor or a maximum of 532kB of
memory bus utilization on a PC-133 bus that transfers 8 bytes on an
average of every two cycles based on 5-1-1-1 memory timing).

I would think this would be particularly noticeable for internet
service providers that offer shell accounts or VNC accounts (like
WorkSpot and LastFoot).

A few of other approaches to consider if one is feeling
more ambitious are:
1. Make the time slice size scale to the number of
   currently runnable processes (more precisely, threads)
   divided by number of CPU's.  I posted something about this
   a week or two ago.  This way, responsiveness is maintained,
   but people who are worried about the extra context switch
   and caching effects can rest assured that this shorter time slices
   would only happen when responsiveness would otherwise be bad.
2. Like #1, but only shrink the time slices when at least
   one of the runnable processes is running at regular or high
   CPU priority.
3. Have the current process give up the CPU as soon as another
   process awaiting the CPU has a higher current->count value.
   That would increase the number of context switches like
   increasing HZ by 5X (with basically the same trade-offs),
   but without increasing the number of timer interrupts.
   By itself, this is probably not worth the complexity.
4. Similar to #3, but only switch on current->count!=0 when
   another process has just become unblocked.
5. I haven't looked at the code closely enough yet, but I tend
   to wonder about the usefulness of having "ticks" when you have
   a real time clock and could avoid unnecessary "tick" interrupts by
   just accounting based on microseconds or something.  I
   understand there may be issues of inaccuracy due to not knowing
   exactly where you are in the current RTC tick, and the cost
   of the unnecessary tick interrupts is probably pretty small.
   I mention this just for completeness.

Adam J. Richter __ __   4880 Stevens Creek Blvd, Suite 104
[EMAIL PROTECTED] \ /  San Jose, California 95129-1034
+1 408 261-6630 | g g d r a s i l   United States of America
fax +1 408 261-6631  "Free Software For The Rest Of Us."






Re: #define HZ 1024 -- negative effects?

2001-04-25 Thread Mike Galbraith

On Wed, 25 Apr 2001, Dan Maas wrote:

> The only other possibility I can think of is a scheduler anomaly. A thread
> arose on this list recently about strange scheduling behavior of processes
> using local IPC - even though one process had readable data pending, the
> kernel would still go idle until the next timer interrupt. If this is the
> case, then HZ=1024 would kick the system back into action more quickly...

Hmm.  I've caught tasks looping here (experimental tree but..) with
interrupts enabled, but schedule never being called despite having
many runnable tasks.

-Mike




Re: #define HZ 1024 -- negative effects?

2001-04-25 Thread Werner Puschitz

On Wed, 25 Apr 2001, Dan Maas wrote:

> > Are there any negative effects of editing include/asm/param.h to change
> > HZ from 100 to 1024? Or any other number? This has been suggested as a
> > way to improve the responsiveness of the GUI on a Linux system.
>
> I have also played around with HZ=1024 and wondered how it affects
> interactivity. I don't quite understand why it could help - one thing I've
> learned looking at kernel traces (LTT) is that interactive processes very,
> very rarely eat up their whole timeslice (even hogs like X). So more
> frequent timer interrupts shouldn't have much of an effect...
>
> If you are burning CPU doing stuff like long compiles, then the increased HZ
> might make the system appear more responsive because the CPU hog gets
> pre-empted more often. However, you could get the same result just by
> running the task 'nice'ly...

A tradeoff of having the kernel check more often whether a running
process should be preempted is that the CPU spends more time in Kernel
Mode and less time in User Mode.  And as a consequence, user programs
run slower.

Regards,
Werner






Re: #define HZ 1024 -- negative effects?

2001-04-25 Thread Dan Maas

> Are there any negative effects of editing include/asm/param.h to change
> HZ from 100 to 1024? Or any other number? This has been suggested as a
> way to improve the responsiveness of the GUI on a Linux system.

I have also played around with HZ=1024 and wondered how it affects
interactivity. I don't quite understand why it could help - one thing I've
learned looking at kernel traces (LTT) is that interactive processes very,
very rarely eat up their whole timeslice (even hogs like X). So more
frequent timer interrupts shouldn't have much of an effect...

If you are burning CPU doing stuff like long compiles, then the increased HZ
might make the system appear more responsive because the CPU hog gets
pre-empted more often. However, you could get the same result just by
running the task 'nice'ly...

The only other possibility I can think of is a scheduler anomaly. A thread
arose on this list recently about strange scheduling behavior of processes
using local IPC - even though one process had readable data pending, the
kernel would still go idle until the next timer interrupt. If this is the
case, then HZ=1024 would kick the system back into action more quickly...

Of course, the appearance of better interactivity could just be a placebo
effect. Double-blind trials, anyone? =)

Regards,
Dan




Re: #define HZ 1024 -- negative effects

2001-04-25 Thread Michael Rothwell

Well, for kicks, I tried setting HZ to 1024 with 2.2.19. It seemed a 
little more responsive, but that could be psychosomatic. :) I did notice 
that I was unable to sync my palm pilot until I set it back to 100. 
YMMV. The most useful "performance" tweak for a GUI that I've come across is:

#define _SHM_ID_BITS 10

... if you're running Gnome and/or Gtk, because of their appetite for 
lots of SHM segments.

-Michael

Mark Hahn wrote:

>>> Are there any negative effects of editing include/asm/param.h to change 
>>> HZ from 100 to 1024? Or any other number? This has been suggested as a 
>>> way to improve the responsiveness of the GUI on a Linux system. Does it 
>> 
> ...
> 
>> Why not just run the X server at a realtime priority?  Then it will get
>> to respond to existing events, such as keyboard and mouse input,
>> promptly without creating lots of superfluous extra clock interrupts.
>> I think you will find this is a better solution.
> 
> 
> it's surprisingly ineffective; usually, if someone thinks responsiveness
> is bad, there's a problem with the system.  for instance, if the system
> is swapping, setting X (and wm, and clients) to RT makes little difference,
> since the kernel is stealing pages from them, regardless of their scheduling
> priority.
> 
> if you're curious, you might be interested in two toy programs
> I've attached.  one is "setrealtime", which will make a pid RT, or else act
> as a wrapper (a la /bin/time).  I have it installed suid root on my system,
> though this is rather dangerous if you have lusers around.  the second is a
> simple memory-hog: mmaps a bunch of ram, and keeps it active (printing out a
> handy measure of how long it took to touch its pages...)
> 
> regards, mark hahn.
> 
> 
> 
> 
> #include <unistd.h>
> #include <stdlib.h>
> #include <stdio.h>
> #include <sys/time.h>
> #include <sys/mman.h>
> 
> volatile unsigned sink;
> 
> double second() {
> struct timeval tv;
> gettimeofday(&tv,0);
> return tv.tv_sec + 1e-6 * tv.tv_usec;
> }
> 
> int
> main(int argc, char *argv[]) {
> int doWrite = 1;
> unsigned size = 80 * 1024 * 1024;
> 
> int letter;
> while ((letter = getopt(argc, argv, "s:wrvh?" )) != -1) {
>   switch(letter) {
>   case 's': size = atoi(optarg) * 1024 * 1024; break;
>   case 'w': doWrite = 1; break;
>   default:
>   fprintf(stderr,"useup [-s mb][-w]\n");
>   exit(1);
>   }
> }
> int *base = (int*) mmap(0, size, 
> PROT_READ|PROT_WRITE, 
> MAP_ANONYMOUS|MAP_PRIVATE, 0, 0);
> if (base == MAP_FAILED) {
>   perror("mmap failed");
>   exit(1);
> }
> 
> int *end = base + size/4;
> 
> while (1) {
>   double start = second();
>   if (doWrite)
>   for (int *p = base; p < end; p += 1024)
>   *p = 0;
>   else {
>   unsigned sum = 0;
>   for (int *p = base; p < end; p += 1024)
>   sum += *p;
>   sink = sum;
>   }
>   printf("%f\n",1000*(second() - start));
> }
> }
> 
> 
> 
> 
> #include <unistd.h>
> #include <stdlib.h>
> #include <stdio.h>
> #include <sched.h>
> 
> int
> main(int argc, char *argv[]) {
> int uid = getuid();
> int pid = atoi(argv[1]);
> int sched_fifo_min, sched_fifo_max;
> static struct sched_param sched_parms;
> 
> if (!pid)
>   pid = getpid();
> 
> sched_fifo_min = sched_get_priority_min(SCHED_FIFO);
> sched_fifo_max = sched_get_priority_max(SCHED_FIFO);
> sched_parms.sched_priority = sched_fifo_min + 1;
> 
> if (sched_setscheduler(pid, SCHED_FIFO, &sched_parms) == -1)
> perror("cannot set realtime scheduling policy");
> 
> if (uid)
>   setuid(uid);
> 
> if (pid == getpid())
>   execvp(argv[1], &argv[1]);
> return 0;
> }





Re: #define HZ 1024 -- negative effects

2001-04-25 Thread Mark Hahn

> > Are there any negative effects of editing include/asm/param.h to change 
> > HZ from 100 to 1024? Or any other number? This has been suggested as a 
> > way to improve the responsiveness of the GUI on a Linux system. Does it 
...
> Why not just run the X server at a realtime priority?  Then it will get
> to respond to existing events, such as keyboard and mouse input,
> promptly without creating lots of superfluous extra clock interrupts.
> I think you will find this is a better solution.

it's surprisingly ineffective; usually, if someone thinks responsiveness
is bad, there's a problem with the system.  for instance, if the system
is swapping, setting X (and wm, and clients) to RT makes little difference,
since the kernel is stealing pages from them, regardless of their scheduling
priority.

if you're curious, you might be interested in two toy programs
I've attached.  one is "setrealtime", which will make a pid RT, or else act
as a wrapper (a la /bin/time).  I have it installed suid root on my system,
though this is rather dangerous if you have lusers around.  the second is a
simple memory-hog: mmaps a bunch of ram, and keeps it active (printing out a
handy measure of how long it took to touch its pages...)

regards, mark hahn.


#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>
#include <sys/time.h>
#include <sys/mman.h>

volatile unsigned sink;

double second() {
    struct timeval tv;
    gettimeofday(&tv, 0);
    return tv.tv_sec + 1e-6 * tv.tv_usec;
}

int
main(int argc, char *argv[]) {
    int doWrite = 1;
    unsigned size = 80 * 1024 * 1024;

    int letter;
    while ((letter = getopt(argc, argv, "s:wrvh?")) != -1) {
        switch(letter) {
        case 's': size = atoi(optarg) * 1024 * 1024; break;
        case 'w': doWrite = 1; break;
        default:
            fprintf(stderr, "useup [-s mb][-w]\n");
            exit(1);
        }
    }
    int *base = (int*) mmap(0, size,
                            PROT_READ|PROT_WRITE,
                            MAP_ANONYMOUS|MAP_PRIVATE, 0, 0);
    if (base == MAP_FAILED) {
        perror("mmap failed");
        exit(1);
    }

    int *end = base + size/4;

    while (1) {
        double start = second();
        if (doWrite)
            for (int *p = base; p < end; p += 1024)
                *p = 0;
        else {
            unsigned sum = 0;
            for (int *p = base; p < end; p += 1024)
                sum += *p;
            sink = sum;
        }
        printf("%f\n", 1000*(second() - start));
    }
}


#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>
#include <sched.h>

int
main(int argc, char *argv[]) {
    int uid = getuid();
    int pid = atoi(argv[1]);
    int sched_fifo_min, sched_fifo_max;
    static struct sched_param sched_parms;

    if (!pid)
        pid = getpid();

    sched_fifo_min = sched_get_priority_min(SCHED_FIFO);
    sched_fifo_max = sched_get_priority_max(SCHED_FIFO);
    sched_parms.sched_priority = sched_fifo_min + 1;

    if (sched_setscheduler(pid, SCHED_FIFO, &sched_parms) == -1)
        perror("cannot set realtime scheduling policy");

    if (uid)
        setuid(uid);

    if (pid == getpid())
        execvp(argv[1], &argv[1]);
    return 0;
}



Re: #define HZ 1024 -- negative effects?

2001-04-25 Thread Nigel Gamble

On Tue, 24 Apr 2001, Michael Rothwell wrote:
> Are there any negative effects of editing include/asm/param.h to change 
> HZ from 100 to 1024? Or any other number? This has been suggested as a 
> way to improve the responsiveness of the GUI on a Linux system. Does it 
> throw off anything else, like serial port timing, etc.?

Why not just run the X server at a realtime priority?  Then it will get
to respond to existing events, such as keyboard and mouse input,
promptly without creating lots of superfluous extra clock interrupts.
I think you will find this is a better solution.

Nigel Gamble[EMAIL PROTECTED]
Mountain View, CA, USA. http://www.nrg.org/

MontaVista Software [EMAIL PROTECTED]




Re: #define HZ 1024 -- negative effects

2001-04-25 Thread Mark Hahn

  Are there any negative effects of editing include/asm/param.h to change 
  HZ from 100 to 1024? Or any other number? This has been suggested as a 
  way to improve the responsiveness of the GUI on a Linux system. Does it 
...
 Why not just run the X server at a realtime priority?  Then it will get
 to respond to existing events, such as keyboard and mouse input,
 promptly without creating lots of superfluous extra clock interrupts.
 I think you will find this is a better solution.

it's surprisingly ineffective; usually, if someone thinks responsiveness
is bad, there's a problem with the system.  for instance, if the system
is swapping, setting X (and wm, and clients) to RT makes little difference,
since the kernel is stealing pages from them, regardless of their scheduling
priority.

if you're curious, you might be interested in two toy programs
I've attached.  one is setrealtime, which will make a pid RT, or else act
as a wrapper (ala /bin/time).  I have it installed suid root on my system,
though this is rather dangerous if your have lusers around.  the second is a
simple memory-hog: mmaps a bunch of ram, and keeps it active (printing out a
handy measure of how long it took to touch its pages...)

regards, mark hahn.


#include unistd.h
#include stdlib.h
#include stdio.h
#include sys/time.h
#include sys/mman.h

volatile unsigned sink;

double second() {
struct timeval tv;
gettimeofday(tv,0);
return tv.tv_sec + 1e-6 * tv.tv_usec;
}

int
main(int argc, char *argv[]) {
int doWrite = 1;
unsigned size = 80 * 1024 * 1024;

int letter;
while ((letter = getopt(argc, argv, s:wrvh? )) != -1) {
switch(letter) {
case 's': size = atoi(optarg) * 1024 * 1024; break;
case 'w': doWrite = 1; break;
default:
fprintf(stderr,useup [-s mb][-w]\n);
exit(1);
}
}
int *base = (int*) mmap(0, size, 
  PROT_READ|PROT_WRITE, 
  MAP_ANONYMOUS|MAP_PRIVATE, 0, 0);
if (base == MAP_FAILED) {
perror(mmap failed);
exit(1);
}

int *end = base + size/4;

while (1) {
double start = second();
if (doWrite)
for (int *p = base; p  end; p += 1024)
*p = 0;
else {
unsigned sum = 0;
for (int *p = base; p  end; p += 1024)
sum += *p;
sink = sum;
}
printf(%f\n,1000*(second() - start));
}
}


#include unistd.h
#include stdlib.h
#include stdio.h
#include sched.h

int
main(int argc, char *argv[]) {
int uid = getuid();
int pid = atoi(argv[1]);
int sched_fifo_min, sched_fifo_max;
static struct sched_param sched_parms;

if (!pid)
pid = getpid();

sched_fifo_min = sched_get_priority_min(SCHED_FIFO);
sched_fifo_max = sched_get_priority_max(SCHED_FIFO);
sched_parms.sched_priority = sched_fifo_min + 1;

if (sched_setscheduler(pid, SCHED_FIFO, sched_parms) == -1)
perror(cannot set realtime scheduling policy);

if (uid)
setuid(uid);

if (pid == getpid())
execvp(argv[1],argv[1]);
return 0;
}



Re: #define HZ 1024 -- negative effects

2001-04-25 Thread Michael Rothwell

Well, for kicks, I tried setting HZ to 1024 with 2.2.19. It seemed a 
little more responsive, but that could be psychosomatic. :) I did notice 
that I was unable to sync my palm pilot until I set it back to 100. 
YMMV. The most useful performace tweak for a GUI that I've come across is:

#define _SHM_ID_BITS10

... if y ou're running Gnome and/or Gtk, because of its appetite for 
lots of SHM segments.

-Michael

Mark Hahn wrote:

 Are there any negative effects of editing include/asm/param.h to change 
 HZ from 100 to 1024? Or any other number? This has been suggested as a 
 way to improve the responsiveness of the GUI on a Linux system. Does it 
 
 ...
 
 Why not just run the X server at a realtime priority?  Then it will get
 to respond to existing events, such as keyboard and mouse input,
 promptly without creating lots of superfluous extra clock interrupts.
 I think you will find this is a better solution.
 
 
 it's surprisingly ineffective; usually, if someone thinks responsiveness
 is bad, there's a problem with the system.  for instance, if the system
 is swapping, setting X (and wm, and clients) to RT makes little difference,
 since the kernel is stealing pages from them, regardless of their scheduling
 priority.
 
 if you're curious, you might be interested in two toy programs
 I've attached.  one is setrealtime, which will make a pid RT, or else act
 as a wrapper (ala /bin/time).  I have it installed suid root on my system,
 though this is rather dangerous if your have lusers around.  the second is a
 simple memory-hog: mmaps a bunch of ram, and keeps it active (printing out a
 handy measure of how long it took to touch its pages...)
 
 regards, mark hahn.
 
 
 
 
 #include unistd.h
 #include stdlib.h
 #include stdio.h
 #include sys/time.h
 #include sys/mman.h
 
 volatile unsigned sink;
 
 double second() {
 struct timeval tv;
 gettimeofday(tv,0);
 return tv.tv_sec + 1e-6 * tv.tv_usec;
 }
 
 int
 main(int argc, char *argv[]) {
 int doWrite = 1;
 unsigned size = 80 * 1024 * 1024;
 
 int letter;
 while ((letter = getopt(argc, argv, s:wrvh? )) != -1) {
   switch(letter) {
   case 's': size = atoi(optarg) * 1024 * 1024; break;
   case 'w': doWrite = 1; break;
   default:
   fprintf(stderr,useup [-s mb][-w]\n);
   exit(1);
   }
 }
 int *base = (int*) mmap(0, size, 
 PROT_READ|PROT_WRITE, 
 MAP_ANONYMOUS|MAP_PRIVATE, 0, 0);
 if (base == MAP_FAILED) {
   perror(mmap failed);
   exit(1);
 }
 
 int *end = base + size/4;
 
 while (1) {
   double start = second();
   if (doWrite)
   for (int *p = base; p  end; p += 1024)
   *p = 0;
   else {
   unsigned sum = 0;
   for (int *p = base; p  end; p += 1024)
   sum += *p;
   sink = sum;
   }
   printf(%f\n,1000*(second() - start));
 }
 }
 
 
 
 
 #include unistd.h
 #include stdlib.h
 #include stdio.h
 #include sched.h
 
 int
 main(int argc, char *argv[]) {
 int uid = getuid();
 int pid = atoi(argv[1]);
 int sched_fifo_min, sched_fifo_max;
 static struct sched_param sched_parms;
 
 if (!pid)
   pid = getpid();
 
 sched_fifo_min = sched_get_priority_min(SCHED_FIFO);
 sched_fifo_max = sched_get_priority_max(SCHED_FIFO);
 sched_parms.sched_priority = sched_fifo_min + 1;
 
 if (sched_setscheduler(pid, SCHED_FIFO, sched_parms) == -1)
 perror(cannot set realtime scheduling policy);
 
 if (uid)
   setuid(uid);
 
 if (pid == getpid())
   execvp(argv[1],argv[1]);
 return 0;
 }
 useup.c
 
 Content-Description:
 
 useup.c
 Content-Type:
 
 TEXT/PLAIN
 Content-Encoding:
 
 BASE64
 
 
 
 setrealtime.c
 
 Content-Description:
 
 setrealtime.c
 Content-Type:
 
 TEXT/PLAIN
 Content-Encoding:
 
 BASE64
 
 


-
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: #define HZ 1024 -- negative effects?

2001-04-25 Thread Dan Maas

 Are there any negative effects of editing include/asm/param.h to change
 HZ from 100 to 1024? Or any other number? This has been suggested as a
 way to improve the responsiveness of the GUI on a Linux system.

I have also played around with HZ=1024 and wondered how it affects
interactivity. I don't quite understand why it could help - one thing I've
learned looking at kernel traces (LTT) is that interactive processes very,
very rarely eat up their whole timeslice (even hogs like X). So more
frequent timer interrupts shouldn't have much of an effect...

If you are burning CPU doing stuff like long compiles, then the increased HZ
might make the system appear more responsive because the CPU hog gets
pre-empted more often. However, you could get the same result just by
running the task 'nice'ly...
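For comparison, demoting the hog with standard `nice` looks like this (the compile command is illustrative):

```shell
# Run a long compile at the lowest priority; interactive tasks then
# preempt it at every scheduling opportunity, independent of HZ.
nice -n 19 make bzImage

# coreutils `nice` with no command prints the inherited niceness:
nice -n 19 nice    # prints 19
```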

The only other possibility I can think of is a scheduler anomaly. A thread
arose on this list recently about strange scheduling behavior of processes
using local IPC - even though one process had readable data pending, the
kernel would still go idle until the next timer interrupt. If this is the
case, then HZ=1024 would kick the system back into action more quickly...

Of course, the appearance of better interactivity could just be a placebo
effect. Double-blind trials, anyone? =)

Regards,
Dan




Re: #define HZ 1024 -- negative effects?

2001-04-25 Thread Mike Galbraith

On Wed, 25 Apr 2001, Dan Maas wrote:

> The only other possibility I can think of is a scheduler anomaly. A thread
> arose on this list recently about strange scheduling behavior of processes
> using local IPC - even though one process had readable data pending, the
> kernel would still go idle until the next timer interrupt. If this is the
> case, then HZ=1024 would kick the system back into action more quickly...

Hmm.  I've caught tasks looping here (experimental tree but..) with
interrupts enabled, but schedule never being called despite having
many runnable tasks.

-Mike




#define HZ 1024 -- negative effects?

2001-04-24 Thread Michael Rothwell

Are there any negative effects of editing include/asm/param.h to change 
HZ from 100 to 1024? Or any other number? This has been suggested as a 
way to improve the responsiveness of the GUI on a Linux system. Does it 
throw off anything else, like serial port timing, etc.?
