Kris Kennaway wrote:
Increasing sample resolution just increases the CPU usage of top itself
but gives no more information about the real culprits :-(
Gunther Mayer wrote:
I don't see why my javavm, apache, postgres and/or radiusd would
spawn such short-lived processes. Come to think of it, I know radius
might be doing just that, but how the heck would I go about finding
out? top -H brings me no closer...
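One low-tech way to catch short-lived processes, since top only shows what is alive at each refresh, is to sample the process table in a loop and see which command names come and go between samples. This is just a sketch; on FreeBSD, enabling process accounting with accton(8) and reading it back with lastcomm(1) would be the more thorough route:

```shell
#!/bin/sh
# Repeatedly sample the process table; command names that appear in
# some samples but not others point at short-lived processes.
out=$(mktemp)
i=0
while [ "$i" -lt 5 ]; do
    ps -ax -o comm= >> "$out"
    sleep 1
    i=$((i + 1))
done
# Most frequently seen command names first; transient entries sink
# to the bottom of the list.
sort "$out" | uniq -c | sort -rn
rm -f "$out"
```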
Either increase the sample resolution or rule out other programs as
the culprits.
But I managed to solve my problem by reverting to my SMP kernel and
upgrading it. The system is back now with 15-minute load averages around
0.05, which is where they should be.
What really remains a puzzle is what caused the bottleneck on my system
while it was running the GENERIC single-CPU kernel. I went back through
my daily cron emails, which show that the 5am load average increased from
around 0 to 1.0 somewhere between day 3 and day 4 after rebooting into
freebsd-update's erroneously downloaded GENERIC kernel. It stayed around
1.0 until this morning, when I reverted and rebooted.
On even closer inspection, before doing freebsd-update I had an uptime
of 202 days on that box. Even then there were some days when the load
average was 1.0 at 5am, so I think the real culprit is my radiusd, which
probably sometimes spawns threads that chew 100% CPU until killed somehow.
Having only one CPU available would of course exacerbate such a problem.
Seeing that performance is back at 100%, I think I will wait for 7.0 to
supercharge my system's threads.
Anyways, thanks for the help Kris.
I see that my java is using no fewer than 26 threads; thread usage not
showing up might well be the problem.
Interesting point you make about libthr. I had no idea about the
different threading options available on 6.x and did some reading up
on it but there's very little official documentation or
recommendations about it. Upon investigation it turns out that all my
core daemons (httpd, postgres, radiusd and java) are linked against
/lib/libpthread.so.2 which afaict after limited reading is what you
refer to as libkse. Is that correct?
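For what it's worth, the quick check I used was just ldd. The target below is a stand-in; in practice you would point it at your actual daemon binaries (e.g. /usr/local/sbin/httpd or your radiusd):

```shell
#!/bin/sh
# Show which thread library a binary is dynamically linked against.
# /bin/sh is used here only as an example target; substitute the
# daemon you actually care about.
target=/bin/sh
ldd "$target" | grep -Ei 'pthread|thr' \
    || echo "no thread library found for $target"
```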
So I'm tempted to use libmap.conf to switch to libthr for all
instances of libpthread, though I'm put off by some very recent
reports (http://roy.marples.name/node/332) that this can cause some
nasty problems. Do you think that's cause for concern?
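For concreteness, the mapping I have in mind would be roughly the following; the version suffixes are my guess for 6.x, so check what ls /lib actually shows on your system first:

```
# /etc/libmap.conf -- remap every libpthread reference to libthr
# (library version numbers are an assumption; verify against /lib)
libpthread.so.2    libthr.so.2
libpthread.so      libthr.so
```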
That bug report makes no sense to me. If they are using libmap
correctly then *all* libpthread references are remapped and if there
are missing symbols they will cause failure from the dynamic linker
when the process is first executed, not random crashes during operation.
Would you mind sharing your libmap.conf and/or symlink setup with the
list as well as your experiences with libthr?
Then again, I should really get my SMP kernel back as a first step...
That will surely help.