First of all, thanks for your prompt replies.

Alan DeKok wrote:
> Apostolos Pantsiopoulos wrote:
>> I did that. Actually it was the first thing I did. I got the same result.
> Also, the server does a LOT more than just running Perl. You are
> measuring the time taken to run your Perl scripts. The time taken to
> process a request can be VERY different.
>> I just benchmarked the "internal" script just to see if the DB is the
>> bottleneck. It is not.

> That's not what I meant.

> The server has "RADIUS" work to do, on top of running your Perl code.
> That means that there's less time to run your Perl code, because the
> server is busy managing the "RADIUS" side of things for you.
I see.

>> No query took more than 0.03 secs (three times the mean time).

> i.e. if you don't run RADIUS, you don't see any overhead from RADIUS.

Yes, I agree that they are competing for resources (and in this case the
DB is the only resource, really). But when my server gets choked up,
shouldn't we expect to see big response times during the benchmark of
the Perl module?

> You are stuck on the idea that your Perl module is all that the server
> is doing. It's not.
Well, yes, that has been my main concern, I must admit, because I have
seen so many replies on the mailing list "urging" people to make the
backend DB faster (and concentrating on that aspect alone when the
server performs poorly).
I have all the other modules turned off.
>> (e.g. running the same queries from an outside
>> program I can get about 200 queries/sec from the DB

> ... and that program does NOTHING other than run DB queries. So?
Point.
>> , when my radiusd reaches the 50 r/s limit the DB idles at 10-24 q/s, so
>> the DB does not seem to be the problem)
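For reference, the standalone benchmark I described can be sketched roughly like this; it is a minimal sketch in Python using an in-memory sqlite3 database as a stand-in for the real backend, and the table and query are made up for illustration:

```python
import sqlite3
import time

# Stand-in database: the real test would point at the production backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, passwd TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")
conn.commit()

N = 1000  # number of queries to time
start = time.perf_counter()
for _ in range(N):
    cur = conn.execute("SELECT passwd FROM users WHERE name = ?", ("alice",))
    cur.fetchone()
elapsed = time.perf_counter() - start

print(f"{N / elapsed:.0f} queries/sec")
```

The point of running it outside radiusd is that the loop does nothing but issue queries, so the queries/sec figure is an upper bound on what the DB can deliver, not a measure of what the server achieves.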

> Find out what else is stopping the server from processing requests.
> Is there ANYTHING you have configured other than your Perl script? If
> so, that may be the issue.
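(A configuration with literally nothing but the perl module running would look roughly like the following; this is a minimal sketch of the default virtual server, with section names following the standard raddb layout, not an actual tested config.)

```
# sites-enabled/default (sketch): only rlm_perl active in each section
authorize {
    perl
}
authenticate {
    perl
}
accounting {
    perl
}
```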
I'll re-check it.
> Alan DeKok.

Thanks again
-
List info/subscribe/unsubscribe? See http://www.freeradius.org/list/users.html