notifying another server on accounting

2010-03-05 Thread Michael Fowler
Greetings,

We have a bit of an odd setup (apparently).  We have a vendor that
provides services based on whether a user has an active and authorized
session.  To support this we forward accounting data to them with a
detail file writer and reader, using the copy-acct-to-home-server
virtual server as a template.

This is using FreeRADIUS 2.1.8.
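
In configuration terms, the relevant pieces look roughly like this (a
rough sketch only; the paths, module name, and realm name are ours and
purely illustrative, and the reader side just follows the
sites-available/copy-acct-to-home-server sample):

    # modules/detail_vendor: write a second copy of each accounting
    # record for the vendor forwarder (name and path are made up)
    detail detail_vendor {
        detailfile = ${radacctdir}/vendor/detail-%Y%m%d
    }

    # default virtual server, accounting section: log locally, then
    # queue the vendor copy
    accounting {
        detail
        detail_vendor
    }

    # reader, modelled on copy-acct-to-home-server: replay the queued
    # records and proxy them to the vendor's realm (the "vendor" realm
    # would be defined in proxy.conf)
    server copy-acct-to-vendor {
        listen {
            type = detail
            filename = ${radacctdir}/vendor/detail-*
            load_factor = 10
        }
        accounting {
            update control {
                Proxy-To-Realm := "vendor"
            }
        }
    }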

I have always felt that lame ASCII drawings help, so this is the setup
(in essence):

request:  NAS -> accounting-server | copy | -> vendor
response: NAS -> accounting-server -> vendor

Unfortunately, we seem to be hitting a wall in terms of packets
transmitted to the vendor.  It is my understanding that the detail
reader is serial in nature, meaning it only sends one packet to the
vendor (in this case), and will not send another until it gets a
response.  The vendor is over a slow link, or the packets are otherwise
delayed, so we are getting a backlog of detail entries.  The detail
file is filling faster than it can be flushed to the vendor.

My question is, how can we fix this?

A few ideas have been batted around.  One is to write some code (via
rlm_perl or rlm_python) that essentially does what the entire
writer/reader combination is doing, only in parallel, meaning it
handles transmitting and retransmitting to the vendor itself.  In the
short term this might be viable, but it's reinventing the wheel, and
it's hard to justify long-term given that most of the people dealing
with this are not programmers.

Another is to somehow load-balance the readers.  I cannot find a
configuration example to support this, but would it be possible, and
more importantly useful, to have multiple readers pointing to the same
detail file?
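
The closest approximation I can think of is to split the detail writes
across several files and run one detail listener per file, something
like the sketch below (the module names, paths, and split condition are
all made up, just to show the shape):

    # two writer instances (names/paths illustrative)
    detail detail_vendor_a {
        detailfile = ${radacctdir}/vendor-a/detail-%Y%m%d
    }
    detail detail_vendor_b {
        detailfile = ${radacctdir}/vendor-b/detail-%Y%m%d
    }

    # main accounting section: split records between the two files
    # (the split condition here is purely illustrative)
    accounting {
        if (Acct-Session-Id =~ /[0-7]$/) {
            detail_vendor_a
        }
        else {
            detail_vendor_b
        }
    }

    # plus two copies of the copy-acct-to-home-server style virtual
    # server, one reading vendor-a/detail-* and one reading
    # vendor-b/detail-*, both proxying to the vendor realm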

Any help or suggestions would be appreciated.  Thanks.

--
Michael Fowler
www.shoebox.net
-
List info/subscribe/unsubscribe? See http://www.freeradius.org/list/users.html


Re: notifying another server on accounting

2010-03-05 Thread Alan DeKok
Michael Fowler wrote:
> Unfortunately, we seem to be hitting a wall in terms of packets
> transmitted to the vendor.  It is my understanding that the detail
> reader is serial in nature, meaning it only sends one packet to the
> vendor (in this case), and will not send another until it gets a
> response.  The vendor is over a slow link, or the packets are otherwise
> delayed, so we are getting a backlog of detail entries.  The detail
> file is filling faster than it can be flushed to the vendor.

  Yes, that is a bit of an issue.

> My question is, how can we fix this?

  Hack the code. :(

> A few ideas have been batted around.  One is to write some code (via
> rlm_perl or rlm_python) that essentially does what the entire
> writer/reader combination is doing, only in parallel, meaning it
> handles transmitting and retransmitting to the vendor itself.  In the
> short term this might be viable, but it's reinventing the wheel, and
> it's hard to justify long-term given that most of the people dealing
> with this are not programmers.

  Too much work.

> Another is to somehow load-balance the readers.  I cannot find a
> configuration example to support this, but would it be possible, and
> more importantly useful, to have multiple readers pointing to the same
> detail file?

  Fix the reader to handle more than one packet.

  The issue right now is that it only tracks where it is in the detail
file in memory.  It *could* have an auxiliary file giving:

- packet offset
- packet data (received response, last sent data, etc.)

   This should be tracked automatically, and cleaned up when the detail
file is deleted.
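
  Very roughly, each detail file would get a companion "work" file of
fixed-size records, along these lines (this is just a sketch of the
idea, not existing code):

    /* Sketch only: per-packet tracking record the detail reader could
     * persist next to the detail file, so several packets can be in
     * flight and the state survives a restart. */
    #include <stdio.h>
    #include <stdint.h>
    #include <sys/types.h>

    typedef struct detail_track_t {
        off_t    offset;  /* offset of the entry in the detail file */
        uint32_t tries;   /* retransmissions so far */
        uint8_t  state;   /* 0 = queued, 1 = sent, 2 = replied */
        uint8_t  id;      /* RADIUS packet ID used for the send */
    } detail_track_t;

    /* Append one record to "<detail>.work".  The whole .work file gets
     * deleted when the detail file itself is deleted. */
    static int track_write(FILE *work, detail_track_t const *rec)
    {
        if (fwrite(rec, sizeof(*rec), 1, work) != 1) return -1;
        return (fflush(work) == 0) ? 0 : -1;
    }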

  There are a number of corner cases to deal with (files getting out of
sync, etc.), but it's possible.

  Alan DeKok.
-
List info/subscribe/unsubscribe? See http://www.freeradius.org/list/users.html