Hi,

On 15.04.2009, at 20:31, Alan DeKok wrote:

Borislav Dimitrov wrote:
Anyways my main trouble is being unable to use multiple rlm_perl
instances like this (I've put the comments to illustrate the flexibility
of using *_clones which is now gone):

 Ah... OK.  That was *not* the intent of the change.  I'll take a look
at fixing it for the next release.

Cool! This is very comforting. If removing the separate "thread" pool for the Perl interpreters was done with the intent of sacrificing some flexibility in order to simplify, refactor, and make the code more efficient (mainly the memory footprint), that's fine. And after all, with the new behaviour it will be possible to achieve parallelism with radiusd's thread pool instead of using *_clones. This is fine, and I'm sure it will become one of the wonderful enhancements I've seen over the years (I've been a happy user for about 4 years now. Fantastic product, btw!). But my problem is being unable to instantiate two or more instances with the well-known syntax:
module_name instance_name {

}
... that is (in my /rlm_perl's/ case):
perl inst1 {

}
perl inst2 {

}
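For completeness, each instance block would contain its own module configuration, something like the following (the script paths are just illustrative placeholders for my setup):

perl inst1 {
	module = ${confdir}/scripts/inst1.pl
}
perl inst2 {
	module = ${confdir}/scripts/inst2.pl
}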
... and then - in, say, accounting - I fork the processing with a switch on the NAS IP, routing the request to a given instance:
accounting { # if cond inst1 else inst2 ...} # for example
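Spelled out in unlang, that switch would look roughly like this (the NAS address is only an example value):

accounting {
	if (NAS-IP-Address == 192.0.2.1) {
		inst1
	}
	else {
		inst2
	}
}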

This is what I'm unable to do with the new 2.1.4, and it is a huge concern for me and our company, leaving downgrading as our only option, since right now we don't have the time to modify our code or rlm_perl's. I tried several different ways to achieve the above: using the instantiate {} section, putting files named inst1 and inst2 (each with its own rlm_perl configuration) in the modules directory, putting it all together in radiusd.conf, etc., and none of them worked. Please let me know if I'm doing something wrong, but if it is a "bug", I suspect many users will be happy to see it fixed (people build different setups using this technique, e.g. using different DB handles in the instances to achieve load balancing).

Let the source be with you all and happy hacking ;-)

-
List info/subscribe/unsubscribe? See http://www.freeradius.org/list/users.html
