On Mon, 2010-01-18 at 12:09 -0500, Marden P. Marshall wrote:
> I have been looking into ways in which to improve the dissemination of
> application configuration data. Specifically, to make the
> configuration data available to the application in realtime and in a
> granular form, as opposed to the current bulk delivery mechanism, i.e.
> configuration files. There were attempts in the past to do this,
> utilizing proprietary provisioning protocols running over XML-RPC, but
> the results fell short of the intended goals. So now I am looking at
> a simpler and more direct approach, namely, to have the applications
> access the configuration data directly, via the configuration
> database.
>
> In order to achieve the objective of realtime and granular
> dissemination, the database would utilize the PostgreSQL LISTEN /
> NOTIFY mechanism, in conjunction with table modification triggers.
> The application would then subscribe, i.e. LISTEN, for modification
> notifications on the relevant database tables. In response to the
> received notifications, the application would then query the database
> directly for the updated configuration. No more waiting for
> configuration files to be created / replicated and no more application
> restarts.
>
> How the application accesses the database records would be based upon
> the specific needs of the application and the complexity of the
> associated configuration data, but would most likely be through simple
> SQL queries, O/R mapping middleware or some combination of both. For
> mission critical applications, a local configuration cache, in the
> form of a properties or XML file, could be maintained. This would
> allow the application to start up in the absence of database
> connectivity, an approach that is already employed by some
> applications.
>
> Any thoughts?
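For concreteness, the notify-then-requery loop being proposed can be
sketched in-process (no real database here — with PostgreSQL the
subscription would be LISTEN and a trigger function would call
pg_notify; all names below are illustrative):

```python
# In-process sketch of the notify-then-requery pattern. "tables" stands
# in for configuration tables, notify() for a PostgreSQL NOTIFY fired by
# a table-modification trigger. All names are illustrative.

tables = {"sip_proxy_config": {"max_forwards": "20"}}
listeners = {}  # channel -> callbacks (stands in for LISTEN)


def listen(channel, callback):
    """Subscribe a callback to a channel (stands in for LISTEN)."""
    listeners.setdefault(channel, []).append(callback)


def notify(channel, payload):
    """Deliver a notification to subscribers (stands in for NOTIFY)."""
    for callback in listeners.get(channel, []):
        callback(payload)


def update_row(table, key, value):
    """Modify a 'table' and fire the 'trigger' that notifies listeners."""
    tables[table][key] = value
    notify("config_change", table)  # a trigger would call pg_notify here


# The application keeps a local view and re-queries on notification.
app_config = dict(tables["sip_proxy_config"])


def on_config_change(table):
    # Re-query only the table named in the notification (granular update).
    app_config.update(tables[table])


listen("config_change", on_config_change)
update_row("sip_proxy_config", "max_forwards", "70")
print(app_config["max_forwards"])  # -> 70
```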

Actually, I don't think that realtime and fine-grained access to most
configuration data is a good goal. While on the surface it may seem so,
I think it actually creates some serious management and system
architecture challenges that are best avoided.

One of the core architectural principles that has guided most sipXecs
service development is that configuration data for services is both:
1. Distributed to the system on which the service runs so that the
service is to the greatest extent possible autonomous.
2. Stored on that system so that a service restart or even system
reboot does not require communication with the central
management database to resume service.

Having services access the SQL database directly has some bad
properties:
* It creates ambiguities or even inability to provide service when
access to the database is unavailable. This limits the ability
to distribute services on multiple systems, especially when
those systems are connected by less reliable WAN links.
* It creates potential performance bottlenecks. We've seen this
in the call-state/CDR system in the proxy (in which records are
written to a local database by each proxy, and those databases
are read by the central call resolver to create the CDRs); the
current high-load stress tests fail first in the CDR subsystem.
* It places a greater burden on the management software to
maintain an internally consistent and usable configuration in
the database at all times. By separating the database from the
configuration used by the services, we allow the management
software (and administrator) to directly control how and when
configuration data is propagated to the live services. A series
of changes can be made in the database which, when complete,
form a new and useful configuration without worrying about
whether or not any intermediate state consisting of just some of
those changes is dysfunctional. Many reconfiguration operations
consist of multiple steps, and it's easy to create situations in
which step 1 will break things until step 2 is done; by
separating the distribution and activation of configuration data
from the storage in the database, we have an obvious direct way
to control when the service 'sees' a new configuration, and can
improve the odds that it is internally consistent.
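The separation of distribution from activation can be sketched
minimally: a complete new configuration is staged beside the live one
and activated in a single atomic step, so the service never observes a
half-finished intermediate state (the file name and JSON format below
are illustrative, not the actual sipXecs mechanism):

```python
import json
import os
import tempfile

CONFIG_PATH = "service-config.json"  # illustrative path


def distribute(new_config):
    """Stage a complete new configuration next to the live one, then
    activate it with an atomic rename. The service never sees a
    half-written intermediate state."""
    fd, tmp_path = tempfile.mkstemp(dir=".", prefix="service-config.")
    with os.fdopen(fd, "w") as f:
        json.dump(new_config, f)
    # Activation is a single atomic step, controlled by the management
    # software, not by when individual database rows happened to change.
    os.replace(tmp_path, CONFIG_PATH)


def load():
    """What the service reads; only ever a complete configuration."""
    with open(CONFIG_PATH) as f:
        return json.load(f)


# A multi-step change is assembled off to the side and becomes visible
# to the service all at once.
distribute({"listen_port": 5060, "domain": "example.com"})
print(load()["listen_port"])  # -> 5060
```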

Which is not to say that we shouldn't do a better job with activation
and reconfiguration to make services more dynamic and responsive - we
absolutely have a lot to improve in this regard.

At present, almost all configuration data is distributed/replicated
through mechanisms provided by sipXsupervisor (there are a few
exceptions, and a couple of those mechanisms need to be improved). We
leverage this in a couple of ways now - including that the
sipXsupervisor enforces that the software version number matches a
configuration version number provided by sipXconfig before starting
anything (preventing new software being started with old data or the
other way around).
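That version-match check amounts to something like this (a sketch only;
the version strings and comparison are illustrative, not the actual
sipXsupervisor logic):

```python
SOFTWARE_VERSION = "4.2.0"  # illustrative version string


def ok_to_start(config_version):
    """Refuse to start a service when the configuration was generated
    for a different software version, in either direction: neither new
    software with old data nor old software with new data."""
    return config_version == SOFTWARE_VERSION


print(ok_to_start("4.2.0"))  # -> True
print(ok_to_start("4.0.0"))  # -> False
```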

I'd like to see us add an explicit indication from sipXconfig to
sipXsupervisor that passes through to a service to look for
configuration updates. This would allow each service to have one or
more configuration datasets (files, databases, whatever) that are
distributed to where the service runs; the service is then told
explicitly to reread them. The data could then be as fine or coarse
grained as is appropriate for the service, but updates happen only at
times controlled appropriately by either the management system or the
administrator.

In any event, a configuration change should rarely require a full
process restart. There will be times when the configuration change
itself is so disruptive and/or making the software implement the change
without a restart is so difficult that it won't be worth the trouble
(changing the port a service listens on comes to mind), but those should
clearly be much, much less often than our current services require. If
we define a standard signal that all processes can use, then we can at
least use it for all new work, and incrementally retrofit it to existing
things (along with fixing them not to need a restart).
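Such a standard signal could be as simple as a SIGHUP handler that
rereads the distributed configuration (a sketch only — the handler,
names, and choice of signal are illustrative, not an existing sipXecs
interface):

```python
import os
import signal

config = {"log_level": "INFO"}  # the live, in-memory configuration
pending_config = {"log_level": "DEBUG"}  # stands in for the on-disk files


def reread_configuration(signum, frame):
    """Signal handler: pick up new configuration without a restart.
    A real service would reparse its distributed config files here."""
    config.update(pending_config)


# Install the handler; the supervisor would send this signal once
# sipXconfig indicates that updated configuration has been distributed.
signal.signal(signal.SIGHUP, reread_configuration)

os.kill(os.getpid(), signal.SIGHUP)  # supervisor says "reread now"
print(config["log_level"])  # -> DEBUG
```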

Woof wrote up a good proposal for such a standard supervisor/service
interface some time ago using a pipe left open between the processes
when the service is forked; I'm sure I can dig up a pointer...
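That pipe-based interface would be along these lines — the supervisor
keeps the write end of a pipe open across the fork, and the service
reads commands from the other end (a sketch under my own assumptions;
the "check-config" command and other details are illustrative, not
necessarily Woof's actual proposal):

```python
import os

# The supervisor creates a pipe before forking the service, keeps the
# write end open on its side, and the service reads commands from the
# read end. The "check-config" command name is illustrative.
read_fd, write_fd = os.pipe()

pid = os.fork()
if pid == 0:
    # Child: the service. Close the write end and wait for commands.
    os.close(write_fd)
    with os.fdopen(read_fd) as commands:
        for line in commands:
            if line.strip() == "check-config":
                # A real service would reread its configuration
                # datasets here, then carry on without restarting.
                print("service: rereading configuration")
    os._exit(0)

# Parent: the supervisor. Close the read end and send a command;
# closing the write end signals EOF, and the service exits cleanly.
os.close(read_fd)
with os.fdopen(write_fd, "w") as channel:
    channel.write("check-config\n")
_, status = os.waitpid(pid, 0)
```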
_______________________________________________
sipx-dev mailing list [email protected]
List Archive: http://list.sipfoundry.org/archive/sipx-dev
Unsubscribe: http://list.sipfoundry.org/mailman/listinfo/sipx-dev
sipXecs IP PBX -- http://www.sipfoundry.org/