On Fri, 2003-02-07 at 00:30, Erich Titl wrote:
> Hi Guys
>
> At 10:01 06.02.2003 -0600, you wrote:
> >On Wednesday 05 February 2003 03:36 pm, Charles Steinkuehler wrote:
> >....
> >
> >Duly noted.  Where does the responsibility of the 'check' and 'restart'
> >lie?  This would seem to be the responsibility of the back-end
> >(save-script), on first glance.  Would a "trigger" be a good method?
> >Would a separate db need to be made to keep track of "triggers" and the
> >needed information?  I'm assuming a dynamic db based on a list of used
> >variables contained in the package itself.  Thoughts?
>
> Wouldn't some kind of 'dirty' flag on a package be sufficient?  Once you
> finish the configuration, all packages with that flag could be restarted.
>
> >.....
> >With some playing, I'm starting to agree with this to a point.  A tree
> >that contained '/path/variable/value' would be simple to set and change.
> >However, pairing the variable/value might not be much fun to use in the
> >files that need this information.  Considering this, if there were an
> >added single db file that concentrated the tree db information, it would
> >be easy to find 'where' to change a value (by hand), and it would give
> >the run-time packages a single location to source.
>
> For a single key=value file there is one already in Bering, and I believe
> in Dachstein too: lrp.conf.  Why not use it?  It only contains parameters
> we need to use anyway.
> If a hierarchical setup is chosen, the directory idea is simple and easy
> to parse, but more difficult for Joe Average to read and find things in.
> Just refer to the /proc directory structure.
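For concreteness, the '/path/variable/value' idea would put one small file
per value on disk, with the path acting as the key.  A purely illustrative
layout (the base path is made up), using the same values as the transcript
below:

    /var/lib/cdb/interfs/default        -> eth0
    /var/lib/cdb/interfs/eth0/ipaddr    -> 172.24.8.24
    /var/lib/cdb/interfs/eth0/netmask   -> 255.255.252.0
    /var/lib/cdb/interfs/eth0/gateway   -> 172.24.8.1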
The api makes the hierarchy "feel" like name=value pairs.  The output from
the api _is_ name=value pairs; the input is sequential arguments that mimic
name=value pairs.  See the following transcript from my CVS code:

[ccarr@ginger leaf-cdb]$ ./leaf-cdb get tree interfs
default='eth0'
eth0_ipaddr='172.24.8.24'
eth0_netmask='255.255.252.0'
eth0_gateway='172.24.8.1'
eth1_ipaddr='192.168.2.10'
eth1_netmask='255.255.255.0'
eth1_gateway='192.168.2.1'

[ccarr@ginger leaf-cdb]$ ./leaf-cdb get interfs/eth0/ipaddr
interfs_eth0_ipaddr='172.24.8.24'

[ccarr@ginger leaf-cdb]$ ./leaf-cdb get tree interfs/eth0
gateway='172.24.8.1'
ipaddr='172.24.8.24'
netmask='255.255.252.0'

[ccarr@ginger leaf-cdb]$ ./leaf-cdb set tree interfs/eth0 broadcast 172.24.11.255 network 172.24.8.0 netbits 22

[ccarr@ginger leaf-cdb]$ ./leaf-cdb get tree interfs/eth0
broadcast='172.24.11.255'
gateway='172.24.8.1'
ipaddr='172.24.8.24'
netbits='22'
netmask='255.255.252.0'
network='172.24.8.0'

A properly designed api makes it easy to get and set "name=value pairs"
without actually having them in the same file.  The strength of this
approach is that an individual package "owns" the data in the files that
map to its name=value pairs, and can be responsible for backing them up
and putting them back into place on the next boot.  The file is
"assembled" rather than edited.

> Another thought, maybe too late: I was wondering how a package like
> shorewall could take its parameters from a pure key=value db.  The keys
> would have to be very elaborate then, and such a setup might be more
> complicated to understand than the current one.  Tom probably had good
> reasons to split his configuration files.

The interaction between the config-db and the trigger/templating system is
a pure abstraction.  The name=value pairs output by the api become the
input (and symbol table) for the template.  With a properly designed
templating system _any_ type or number of files can be generated, and data
can be morphed or manipulated after it leaves the config-db (i.e. netmask
could be transformed to netbits or vice-versa).

The workflow is:

- change the config-db using the api
- fire a predefined (and documented) trigger saying you have done so

Then the trigger "handlers" take over.  Any package that is interested in
the occurrence of a certain trigger installs a handler (drops a script into
the corresponding directory) to do whatever it needs to have done when that
trigger is fired.  The trigger mechanism is likely just debian run-parts or
the like; very simple.

The general workflow for a trigger handler (see the sketch below):

- read the config-db, or the portion of the config-db it is interested in,
  using the api
- use that as a symbol table for any number of templating operations it
  needs to execute to transform its operating files
- atomically attempt to apply the changes, restart its service or daemon,
  or otherwise put the new values into action
- check to see if it worked
- roll back to the old values if it didn't (this will take some additional
  work)

Your example of shorewall is very apropos.  Shorewall will need to know
about nearly every networking change on the box, as well as changes to its
own configuration.  If a new interface is added, it must regenerate its
interfaces file and restart.  If someone adds a rule, it must regenerate
its rules or policy files.  It is up to the package itself (or the folks
who are trying to integrate it with the web interface, I suppose) to
interpret the event and make the necessary changes to the configuration
files.
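To make the shorewall case concrete, here is a rough sketch of what such a
trigger handler could look like.  Everything in it is hypothetical: the
trigger directory, the template path, and the use of sed as a stand-in for
the real templating system; the only piece taken from above is the
leaf-cdb get/set api.  It is meant to show the shape of a handler, not
working code.

#!/bin/sh
# Hypothetical handler, dropped into a trigger directory such as
# /etc/cdb/trigger.d/interfaces/ and executed by run-parts (or similar)
# when the "interfaces" trigger is fired.
set -e

# 1. Read the slice of the config-db we care about; the api emits
#    name='value' pairs (see the transcript above), so eval gives us a
#    shell symbol table: $ipaddr, $netmask, $gateway, ...
eval "$(leaf-cdb get tree interfs/eth0)"

# 2. Templating step: regenerate the operating file from a template.
#    A trivial sed substitution stands in for the real templating system.
sed -e "s|@IPADDR@|$ipaddr|g" \
    -e "s|@NETMASK@|$netmask|g" \
    /usr/lib/shorewall/interfaces.template > /etc/shorewall/interfaces.new

# 3. Apply the change atomically, keeping the old file for rollback.
cp -p /etc/shorewall/interfaces /etc/shorewall/interfaces.old
mv /etc/shorewall/interfaces.new /etc/shorewall/interfaces

# 4. Put the new values into action and check that it worked;
#    roll back to the old file if it didn't.
if ! shorewall restart; then
    mv /etc/shorewall/interfaces.old /etc/shorewall/interfaces
    shorewall restart
    exit 1
fi

The point is that the handler only ever talks to the api and to its own
files; neither the web interface nor the config-db needs to know anything
about shorewall.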
The reason it must work this way is that if a web interface knows the
results of its actions, it is necessarily and _inextricably_ bound to the
implementation.  Then if someone comes along and creates a fancy new way to
configure interfaces at boot time, or a new, even tinier dhcp server, the
web interface would have to know about it to get it configured.  This is
bad.  I think.  If there is a well-designed api and a strong templating
mechanism, combined with a well-documented trigger interface, the new
package can simply drop in a trigger handler that reads the same values the
old package did, generates its new configuration files and does whatever
else needs to be done.

This adds strength to the LEAF framework: a base from which to proceed and
a consistency for package maintainers to build upon.  The key is to build
the complexity into the core components so that packages and features laid
on top become ever more trivial to implement.  If making a new package for
Bering means writing a whole new subsection of a web interface (which some
folks don't know how to do), people will be less likely to approach the
task.  If they just have to write a shell script against a simple and
well-documented framework (and the web interface will just _work_), the
barrier to playing the game correctly is lowered.

--
-----------------------------------------------------------------------
Chad Carr                                            [EMAIL PROTECTED]
-----------------------------------------------------------------------