Re: (RADIATOR) Input queue size
Cheers, and thanks for the responses. I decided to compound your replies and answer in a single message, hope that's ok with you. Comments below. -GSH

- Original Message - From: Claudio Lapidus [EMAIL PROTECTED] To: Guðbjörn S. Hreinsson [EMAIL PROTECTED]; [EMAIL PROTECTED] Sent: Thursday, November 13, 2003 2:18 AM Subject: Re: (RADIATOR) Input queue size

Hello Guðbjörn

this may be unrelated, but I am interested in any and all tuning listmembers have done in the OS for Radiator performance. We are running two radiator servers with one proxy radiator in front and separate SQL and LDAP machines.

From CL: Fine, but what OS do you use? It might be interesting to have a hardware summary too. ...

From HI: Yes, it's useful to know the hardware/software platform and the various versions of Perl, etc.

There are two RADIUS servers, HP lp1000r machines with two Intel Pentium III 1 GHz processors (256 KB cache, Coppermine), 1 GB of memory and one 36 GB SCSI disk each. These are running RedHat AS 2.1, Radiator 3.7.1 and Perl 5.6.1. We also have a MySQL db running on a separate server, a DELL 1650 with two Intel Xeon 2.6 GHz CPUs (512 KB cache) and 2 GB of memory, on the same Linux version. There is a separate RADIUS proxy machine on a separate server with the same hw/os as the MySQL server; it has the same Radiator and Perl versions as the RADIUS application servers. There is a separate LDAP server, an HP L2000/2 with two 500 MHz PA-RISC 8500 CPUs, 1 GB of memory and four 36 GB disks, plus a similar secondary machine containing a replica of the db. These machines are rarely busy in terms of CPU, memory or disk. Even during peak loads, when a router has been reloaded, the systems are not that loaded. This RADIUS setup serves xDSL, PSTN/ISDN, Cable and hotSpot NAS's.

[snip] Lengthening the udp queues seems to really have adverse effects on this situation.
We have not really tried shortening the queue, which might have even more adverse effects; without testing, though, I can't tell.

From CL: I can imagine that lengthening the queue only adds to the effect of the server processing old packets, i.e. packets whose original timer (at the NAS) has already expired. The root problem is the mismatch between the speed of the NAS sending packets and the server processing them. It is probably worth trying to increase the timeout setting at the NAS, at least to diminish retransmissions (but beware of total authentication time then). A quicker failover to a less loaded secondary might help too.

Well, this seems only to happen at server reloads or similar problem times. Throwing more hw at this setup is one way; we would also like to investigate the possibilities of improving the setup itself. We think that if the requests were more parallel (not threaded or multithreaded) this setup could process a lot more requests. This applies to both SQL and LDAP requests. We also think the choice of UDP-only transport for RADIUS was not wise. For high loads (many packets) TCP is a much better suited protocol. Then you also have timing and can throw away old packets immediately, instead of processing and returning them only for the NAS to throw them away... But that's nothing we can solve here. Timeouts are 5 seconds at the NAS's; each one retries 3 times, waiting 5 seconds each time. We don't think changing this (much) is wise. ...

From HI: Claudio is correct, the usual cause of problems of this sort is the backend delay associated with querying the LDAP and/or SQL database. It is very helpful to look at a trace 4 debug with LogMicroseconds turned on (requires Time::HiRes from CPAN). This will show exactly how much time is being spent waiting for the queries to complete.
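As a rough sketch, the debug setup described here amounts to something like the following in the Radiator configuration file (the log file path is hypothetical; Trace, LogFile and LogMicroseconds are global parameters documented in the Radiator reference manual):

```
# Debug-level tracing with microsecond-resolution timestamps, so the
# per-query backend delays show up directly in the log.
# LogMicroseconds requires the Time::HiRes module from CPAN.
Trace 4
LogMicroseconds
LogFile /var/log/radiator/logfile
```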
On average, LDAP requests take 35 ms, SQL requests take 13 ms, and proxy delay is about 12 ms; altogether it takes about 120 ms on average to process a request (from NAS send to reply). We've tuned this quite a bit and this seems to be as good as it's going to get.

And you are correct in your observation that increasing the queue size can adversely affect performance due to the increased number of retry requests that build up in the queue.

To counter this we have configured multiple instances of Radiator for authentication/authorization and accounting, and instances for separate NAS's or NAS groups. This in effect simulates having a threaded Radiator, to reduce the effect of this sequential processing.

From CL: OK, but are you sure that the bottleneck is at the Radiator level, or might it be at the LDAP server? In the latter case it probably won't be of much help anyway.

LDAP requests take a fixed time (35 ms). It's mostly due to the filter we have; if we took away everything but the uid we would probably get 5 ms or less. The LDAP server also does not really care how many requests you throw at it, all requests get replied to pretty
Re: (RADIATOR) Input queue size
Can all these Radiator instances use the same logfiles? Or will they have problems racing for file locks?

Vangelis

Frank Danielson wrote: It's really not that hard. You run a number of Radiator instances, with each one having its own connection to the LDAP, SQL, or whatever backend. Then you front those with an instance or two of Radiator running AuthBy ROUNDROBIN or AuthBy LOADBALANCE to distribute the requests among them. You can process quite a lot of requests simultaneously this way. If your current server is not responding fast enough but the CPU utilization is not maxed out, you are probably just hitting the ceiling on how many requests a single instance can process at a time. Start up some more processes on the box and use all those processor cycles that you paid for. -Frank

-Original Message- From: Claudio Lapidus [mailto:[EMAIL PROTECTED] Sent: Wednesday, November 12, 2003 9:19 PM To: Guðbjörn S. Hreinsson; [EMAIL PROTECTED] Subject: Re: (RADIATOR) Input queue size

.. From my own corner, I wish it were possible to have more than one established connection with the SQL backend, so as to parallelize requests to a certain degree. But yes, I suppose that means multithreading, and AFAIK that's not possible under Perl 5.6 nor 5.8, I think. Perhaps Perl 6 would do it?

=== Archive at http://www.open.com.au/archives/radiator/ Announcements on [EMAIL PROTECTED] To unsubscribe, email '[EMAIL PROTECTED]' with 'unsubscribe radiator' in the body of the message.
RE: (RADIATOR) Input queue size
All Radiator log accesses are done lock-write-unlock, IIRC, so you should be fine. I'd suggest double-checking with either the docs, the source, or Hugh before putting it on a production system :)

-Original Message- From: Vangelis Kyriakakis [mailto:[EMAIL PROTECTED] Sent: 13 November 2003 08:59 Cc: [EMAIL PROTECTED] Subject: Re: (RADIATOR) Input queue size

Can all these Radiator instances use the same logfiles? Or will they have problems racing for file locks?

Vangelis

[snip]
Re: (RADIATOR) Input queue size
Hello Matthew - You are quite correct. Radiator does an open-write-close for every log event. In spite of this, however, I would not recommend having more than one Radiator instance writing to a log file, as it becomes almost impossible to understand what is going on. It is _much_ better to set up your logging using GlobalVars passed in on the startup command line. This way you can have the same configuration file but have different instances logging to different files.

regards Hugh

On 14/11/2003, at 3:51 AM, Matthew Trout wrote: All Radiator log accesses are done lock-write-unlock, IIRC, so you should be fine. I'd suggest double-checking with either the docs, the source, or Hugh before putting it on a production system :)

[snip]

-Original Message- From: Claudio Lapidus [mailto:[EMAIL PROTECTED] Sent: Wednesday, November 12, 2003 9:19 PM To: Guðbjörn S. Hreinsson; [EMAIL PROTECTED] Subject: Re: (RADIATOR) Input queue size ..
From my own corner, I wish it were possible to have more than one established connection with the SQL backend, so as to parallelize requests to a certain degree. But yes, I suppose that means multithreading, and AFAIK that's not possible under Perl 5.6 nor 5.8, I think. Perhaps Perl 6 would do it?

NB: have you included a copy of your configuration file (no secrets), together with a trace 4 debug showing what is happening?

-- Radiator: the most portable, flexible and configurable RADIUS server anywhere. Available on *NIX, *BSD, Windows, MacOS X. - Nets: internetwork inventory and management - graphical, extensible, flexible with hardware, software, platform and database independence. - CATool: Private Certificate Authority for Unix and Unix-like systems.
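A sketch of the GlobalVar approach Hugh describes, assuming Radiator's convention that name=value pairs on the radiusd command line become global variables referenced as %{GlobalVar:name} in the configuration file (the variable name "instance" and the paths here are made up for illustration):

```
# Start each instance with its own identifier, e.g.:
#   radiusd -config_file /etc/radiator/radius.cfg instance=auth1
#   radiusd -config_file /etc/radiator/radius.cfg instance=acct1
#
# Then, in the single shared configuration file, each instance
# expands to its own log file:
LogFile /var/log/radiator/%{GlobalVar:instance}.log
```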
Re: (RADIATOR) Input queue size
Hello Vangelis - See my other mail on this topic, but I suggest you use different log files for different Radiator instances.

regards Hugh

On 13/11/2003, at 7:59 PM, Vangelis Kyriakakis wrote: Can all these Radiator instances use the same logfiles? Or will they have problems racing for file locks? Vangelis

[snip]

NB: have you included a copy of your configuration file (no secrets), together with a trace 4 debug showing what is happening?
Re: (RADIATOR) Input queue size
Cheers, this may be unrelated, but I am interested in any and all tuning listmembers have done in the OS for Radiator performance.

We are running two Radiator servers with one proxy Radiator in front and separate SQL and LDAP machines. Since we perform LDAP auth, the incoming requests are sequential, which limits our rate of authentication. We have seen that we can handle at most about 1500 requests per minute per server during peak loads (server restarts etc.). This is mostly load from xDSL users (we do periodic tarpitting for bad users).

We have also seen that at these peaks UDP packets begin to be dropped (by the OS, I imagine) and aaa rates start to get worse. This drop in rates seems to be related to the fact that if the RADIUS servers do not respond in a timely fashion, the NAS's begin to resend the RADIUS requests, adding to the incoming rate of packets, increasing the UDP drops, etc. We actually monitor UDP packet drops and restart the Radiators, which increases the rate for a while, until there is another UDP queue buildup and UDP packets start to be dropped, NAS's start to resend packets, etc., until the monitor script restarts the servers. Lengthening the UDP queues seems to really have adverse effects on this situation. We have not really tried shortening the queue, which might have even more adverse effects; without testing, though, I can't tell.

To counter this we have configured multiple instances of Radiator for authentication/authorization and accounting, and instances for separate NAS's or NAS groups. This in effect simulates having a threaded Radiator, to reduce the effect of this sequential processing. This has not seemed to be related to CPU load or network performance, we have looked at these in detail.
We also looked at dropping RADIUS packets which were x seconds old, but there is no practical way to do this, since we really have no way of knowing when the NAS sent the UDP packet (I wish RADIUS supported TCP, it's much better suited for high traffic rates). We did an estimate once of how many packets would fit in the queue, based on some average size, but in the end this served no real purpose.

If anyone has input on this issue or OS tuning for Radiator I'd love to hear about it. Hope you understand my attempt to explain the above scenario. Basically we have a pretty stable environment today, but perhaps overly complex to manage because of the multiple instances. Hugh, is a threaded ldap handler on the horizon? Is this perl or radiator related?

Rgds, -GSH

- Original Message - From: Hugh Irvine [EMAIL PROTECTED] To: Claudio Lapidus [EMAIL PROTECTED] Cc: [EMAIL PROTECTED] Sent: Wednesday, November 12, 2003 3:02 AM Subject: Re: (RADIATOR) Input queue size

Hello Claudio - This is really an operating system issue, as the UDP buffer space is managed by the OS. You should have a look at netstat and friends. Solaris may also have additional tools that allow you to look at what the system is doing.

regards Hugh

On 12/11/2003, at 1:28 PM, Claudio Lapidus wrote: Hello Hugh, Is there a way to inspect the length of the input queue during runtime? I'm running Radiator 3.6 on Solaris 8, Perl 5.8.0, no monitor setup. thanks in advance cl.

[snip]
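The "udp queue" being lengthened or shortened in the discussion above is the kernel's per-socket receive buffer. A minimal Python sketch (not Radiator code) of inspecting and requesting it via SO_RCVBUF; the 1 MB request is an arbitrary example, and on Linux the grant is capped by net.core.rmem_max:

```python
import socket

# A UDP socket of the kind a RADIUS server listens on (port chosen by the OS).
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind(("127.0.0.1", 0))

# The receive buffer size the kernel granted by default.
before = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)

# Ask for a 1 MB buffer; the kernel clamps the request to its configured
# maximum (net.core.rmem_max on Linux) and may report roughly double the
# requested size to account for bookkeeping overhead.
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 << 20)
after = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
s.close()
print(before, after)
```

A daemon that does this for its own listening socket sidesteps system-wide default tuning, at the cost of the clamp still applying.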
Re: (RADIATOR) Input queue size
Hello Guðbjörn

this may be unrelated, but I am interested in any and all tuning listmembers have done in the OS for Radiator performance. We are running two radiator servers with one proxy radiator in front and separate SQL and LDAP machines.

Fine, but what OS do you use? It might be interesting to have a hardware summary too.

[snip] Lengthening the udp queues seems to really have adverse effects on this situation. We have not really tried shortening the queue, which might have even more adverse effects; without testing, though, I can't tell.

I can imagine that lengthening the queue only adds to the effect of the server processing old packets, i.e. packets whose original timer (at the NAS) has already expired. The root problem is the mismatch between the speed of the NAS sending packets and the server processing them. It is probably worth trying to increase the timeout setting at the NAS, at least to diminish retransmissions (but beware of total authentication time then). A quicker failover to a less loaded secondary might help too.

To counter this we have configured multiple instances of radiators for authentication/authorization and accounting, and instances for separate NAS's or NAS groups. This in effect simulates having a threaded radiator, to reduce the effect of this sequential processing.

OK, but are you sure that the bottleneck is at the Radiator level, or might it be at the LDAP server? In the latter case it probably won't be of much help anyway.

This has not seemed to be related to CPU load or network performance, we have looked at these in detail.

No, it's probably more I/O bound (disk, I mean).

If anyone has input on this issue or OS tuning for Radiator I'd love to hear about it. Hope you understand my attempt to explain the above scenario. Basically we have a pretty stable environment today, but perhaps overly complex to manage because of the multiple instances.
Back to my original question then: I'm struggling to measure the effective length of the input queue in Solaris. Linux's netstat shows it readily, and I remember Tru64 doing the same. But Solaris' netstat lacks this one, apparently. I'll have to continue my quest...

Hugh, is a threaded ldap handler on the horizon? Is this perl or radiator related?

From my own corner, I wish it were possible to have more than one established connection with the SQL backend, so as to parallelize requests to a certain degree. But yes, I suppose that means multithreading, and AFAIK that's not possible under Perl 5.6 nor 5.8, I think. Perhaps Perl 6 would do it?
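On Linux, the drop counters that netstat -su prints come from /proc/net/snmp. A small Python sketch of reading the UDP counters directly (Linux-specific, so no help on Solaris, which is exactly the gap Claudio describes):

```python
def udp_counters(path="/proc/net/snmp"):
    """Parse the two 'Udp:' lines (header row, then value row) into a dict."""
    with open(path) as f:
        rows = [line.split() for line in f if line.startswith("Udp:")]
    header, values = rows[0][1:], rows[1][1:]
    return dict(zip(header, (int(v) for v in values)))

counters = udp_counters()
# InErrors includes datagrams dropped for lack of receive buffer space;
# newer kernels also break that out as a separate RcvbufErrors counter.
print(counters["InDatagrams"], counters["InErrors"])
```

Polling these counters is a lighter-weight alternative to restarting servers on observed drops, since a rising InErrors rate shows the queue overflowing before service degrades badly.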
Re: (RADIATOR) Input queue size
Hello Claudio, Hello Guðbjörn - Comments below.

On 13/11/2003, at 1:18 PM, Claudio Lapidus wrote: Hello Guðbjörn

this may be unrelated, but I am interested in any and all tuning listmembers have done in the OS for Radiator performance. We are running two radiator servers with one proxy radiator in front and separate SQL and LDAP machines.

Fine, but what OS do you use? It might be interesting to have a hardware summary too.

Yes, it's useful to know the hardware/software platform and the various versions of Perl, etc.

[snip] Lengthening the udp queues seems to really have adverse effects on this situation. We have not really tried shortening the queue, which might have even more adverse effects; without testing, though, I can't tell.

I can imagine that lengthening the queue only adds to the effect of the server processing old packets, i.e. packets whose original timer (at the NAS) has already expired. The root problem is the mismatch between the speed of the NAS sending packets and the server processing them. It is probably worth trying to increase the timeout setting at the NAS, at least to diminish retransmissions (but beware of total authentication time then). A quicker failover to a less loaded secondary might help too.

Claudio is correct, the usual cause of problems of this sort is the backend delay associated with querying the LDAP and/or SQL database. It is very helpful to look at a trace 4 debug with LogMicroseconds turned on (requires Time::HiRes from CPAN). This will show exactly how much time is being spent waiting for the queries to complete. And you are correct in your observation that increasing the queue size can adversely affect performance due to the increased number of retry requests that build up in the queue.

To counter this we have configured multiple instances of radiators for authentication/authorization and accounting, and instances for separate NAS's or NAS groups.
This in effect simulates having a threaded radiator, to reduce the effect of this sequential processing.

OK, but are you sure that the bottleneck is at the Radiator level, or might it be at the LDAP server? In the latter case it probably won't be of much help anyway.

Correct again. We have observed these problems too, where parallel requests can also slow things down. BTW - this is one of the strong arguments against a multi-threaded server, which may not help at all in some situations. In general it is easier in the first instance to do what you have done with multiple instances and a front end load balancer. Just out of interest, the largest Radiator setup we are familiar with uses this architecture, with a load balancer feeding 6 Radiator hosts, each one with an authentication and an accounting instance. The backend is a *very* fast Oracle database server and the overall throughput has been tested at over 1200 radius requests per second.

This has not seemed to be related to CPU load or network performance, we have looked at these in detail.

No, it's probably more I/O bound (disk, I mean).

I would agree - again, a trace 4 debug with LogMicroseconds will show us exactly what is happening.

If anyone has input on this issue or OS tuning for Radiator I'd love to hear about it. Hope you understand my attempt to explain the above scenario. Basically we have a pretty stable environment today, but perhaps overly complex to manage because of the multiple instances.

Back to my original question then: I'm struggling to measure the effective length of the input queue in Solaris. Linux's netstat shows it readily, and I remember Tru64 doing the same. But Solaris' netstat lacks this one, apparently. I'll have to continue my quest...

On this topic, have you checked the Sunfreeware site to see if there are any useful tools in this regard? www.sunfreeware.com

Hugh, is a threaded ldap handler on the horizon? Is this perl or radiator related?
This topic comes up from time to time, and the fundamental problem at the moment is that Perl itself does not currently have production quality threading support. This being the case, we have not pursued it actively. And note my previous comments about whether or not this would be a good thing in any case.

From my own corner, I wish it were possible to have more than one established connection with the SQL backend, so as to parallelize requests to a certain degree. But yes, I suppose that means multithreading, and AFAIK that's not possible under Perl 5.6 nor 5.8, I think. Perhaps Perl 6 would do it?

As mentioned above, the easiest way to do this currently is with a load balancer (you could use the AuthBy ROUNDROBIN, VOLUMEBALANCE or LOADBALANCE modules) and multiple instances of Radiator. Note that in most cases, at least using one instance for authentication and another for accounting is a good first step. We will continue to monitor the Perl support for multi-threading too, of course.

regards Hugh

NB: have you included a copy of
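A hedged sketch of the front end Hugh and Frank describe: a Radiator instance with an AuthBy ROUNDROBIN clause proxying to two local back-end instances. The ports, secret, and the choice to run the back ends on non-standard ports on the same host are made up for illustration; see the AuthBy ROUNDROBIN section of the Radiator reference manual for the exact parameters:

```
# Front-end instance: distribute requests round-robin across two
# back-end Radiator instances listening on different ports.
<Handler>
        <AuthBy ROUNDROBIN>
                <Host localhost>
                        AuthPort 11645
                        AcctPort 11646
                        Secret mysecret
                </Host>
                <Host localhost>
                        AuthPort 11647
                        AcctPort 11648
                        Secret mysecret
                </Host>
        </AuthBy>
</Handler>
```

Each back end keeps its own blocking LDAP/SQL connection, so the pair can service two queries concurrently even though each individual instance is sequential.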
RE: (RADIATOR) Input queue size
It's really not that hard. You run a number of Radiator instances, with each one having its own connection to the LDAP, SQL, or whatever backend. Then you front those with an instance or two of Radiator running AuthBy ROUNDROBIN or AuthBy LOADBALANCE to distribute the requests among them. You can process quite a lot of requests simultaneously this way. If your current server is not responding fast enough but the CPU utilization is not maxed out, you are probably just hitting the ceiling on how many requests a single instance can process at a time. Start up some more processes on the box and use all those processor cycles that you paid for. -Frank

-Original Message- From: Claudio Lapidus [mailto:[EMAIL PROTECTED] Sent: Wednesday, November 12, 2003 9:19 PM To: Guðbjörn S. Hreinsson; [EMAIL PROTECTED] Subject: Re: (RADIATOR) Input queue size

.. From my own corner, I wish it were possible to have more than one established connection with the SQL backend, so as to parallelize requests to a certain degree. But yes, I suppose that means multithreading, and AFAIK that's not possible under Perl 5.6 nor 5.8, I think. Perhaps Perl 6 would do it?
Re: (RADIATOR) Input queue size
Hello Claudio - This is really an operating system issue, as the UDP buffer space is managed by the OS. You should have a look at netstat and friends. Solaris may also have additional tools that allow you to look at what the system is doing.

regards Hugh

On 12/11/2003, at 1:28 PM, Claudio Lapidus wrote: Hello Hugh, Is there a way to inspect the length of the input queue during runtime? I'm running Radiator 3.6 on Solaris 8, Perl 5.8.0, no monitor setup. thanks in advance cl.

NB: have you included a copy of your configuration file (no secrets), together with a trace 4 debug showing what is happening?