[EMAIL PROTECTED] wrote:

This is a little complicated to answer. These tools are written (deliberately) to assume little to nothing about one another or the environment they run in. Where I have made assumptions, I have tried to document them. They are in some sense generic. Now, a real configuration chain at a site is not generic; it's like the difference between programs and programming languages: these are constructs that you can use to solve problems, not problem solutions on their own.

Hmm, so each organization would be responsible for building their own tool chain, huh? Seems you should at least have a best-practice tool chain, just to make it easy for people. In the long run, your approach is clearly superior; in the short term, the current generation of sysadmins can't handle that, unfortunately. That's why I've tried to build Puppet as a cohesive tool chain but keep it as uncoupled as possible.

That might seem a little far from the question, but you're essentially asking "where do I find templates in the profile, and what do they look like?" The simple answer is: wherever you put them (which isn't meant to be facetious). At a particular site, or with a particular tool, you might specify, for example, that all files to be made from templates are listed in the "files" map. I'll come back to that idea in a moment. You might also specify that they have to be listed in a map with their individual package. Something at the other end has to know where they are or what they look like, and invoke grplace with the necessary parameters.

Ah. This is problematic, IMO, because it makes it very difficult to share configurations -- you can only share with those who share the same template policy as you.

For the particular case where you have a files map, I've written a very simplistic 14-line shell script (error checking and friendliness omitted), which is given in the man page. It goes through every member of the "files" map and instantiates it using grplace. The files can go anywhere on the filesystem, have any owner or access mode, and use any template. Each entry in files has a name, an access mode, owner, group and template. grcfg is used to find out how many entries there are.

Each entry looks like this one for an example passwd file (nb in mcfg syntax since XML is horrible):

 files.passwd.name = "/etc/passwd";
 files.passwd.mode = "644";
 files.passwd.owner = "root";
 files.passwd.group = "root";
 files.passwd.template = "passwd.grp";

 files.passwd.content.ed.name = "ed";
 files.passwd.content.ed.home = "/home/ed";
 files.passwd.content.ed.shell = $default_shell;
 files.passwd.content.ed.uid = 500;
 files.passwd.content.ed.gid = 500;
 files.passwd.content.ed.gecos = "Edmund Smith";

Heh, that's redundant in Puppet -- it's Puppet's job to translate from attributes to a record:

  User { shell => "/bin/bash" } # set a default

  user { ed:
    home => "/home/ed",
    uid => 500,
    ...
  }
  ...

Frankly, I'd love to stop users from managing file contents entirely, but I know I can't until Puppet supports a lot more resource types, and supports them a lot better.

That is, each entry has the grplace parameters inside it, as well as a "content" map that contains all the values the template should use.
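As a toy illustration of that loop (the data shapes here are mine, not grplace's or grcfg's), here is the same idea in Ruby, with Kernel#format's %{...} references standing in for grplace's template expansion:

```ruby
# Toy sketch only: the real man-page script is ~14 lines of shell driving
# grcfg and grplace.  Plain Ruby hashes stand in for the compiled profile,
# and Kernel#format stands in for template expansion.
require "tmpdir"

# One "files" entry, mirroring the mcfg example above.
FILES = {
  "passwd" => {
    name:     "passwd",               # would be "/etc/passwd" on a real host
    mode:     0o644,
    template: "%{name}:x:%{uid}:%{gid}:%{gecos}:%{home}:%{shell}\n",
    content:  {
      "ed" => { name: "ed", uid: 500, gid: 500, gecos: "Edmund Smith",
                home: "/home/ed", shell: "/bin/bash" },
    },
  },
}

# Walk every member of the files map and instantiate its template.
def instantiate(files, destdir)
  files.each_value do |entry|
    body = entry[:content].values.map { |user| format(entry[:template], user) }.join
    path = File.join(destdir, entry[:name])
    File.write(path, body)
    File.chmod(entry[:mode], path)    # chown/chgrp skipped: they need root
  end
end
```

Run against a temporary directory, this produces a one-line passwd file with mode 644; the owner and group fields of the entry are the part a real deployment would hand to chown.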

I can see how that would work and how it could be useful for lower-level tools that focus on managing file contents.

[SNIP]
I have never intended for it to be called manually; the basic use case was the one above. It was always my intention that it would be a part of a larger solution, where that larger solution would determine what conventions governed the locations and types of templates. It actually took quite a lot of work to make sure that grplace didn't make those decisions (it would've been much easier to make it pull its templates from a specific location, or just look for special "template" entries in the file, but grplace itself does nothing of the sort; you, its invoker, must know what semantics you want from the profile, which in turn means it supports almost any semantics.)

I agree in the long term, but it's nice to have a standard plus the ability to deviate from that standard.

When you say web applications, what do you mean? jsp/php/asp? Sure, that's a similar problem, but one that isn't dealing with:

Ruby's ERb, perl's HTML::Template, and plenty more.

potentially hostile input (templates)

Puppet's templates are all stored server-side, so they're no more hostile than the configurations themselves.

potentially root authority
a particular data store with an awkward (for these purposes) format

They're used by a specialised audience, the price of mistakes is much lower and the volume of code can afford to be much greater. Here, you want to be confident that nothing awful can happen, and to see as easily as possible what it is going to do. Also, all three of the above fit into an existing markup language, whereas unix files are not a consistent markup language. (ie "angle brackets are special, amp is special, double quotes are special".. in a unix config file, nothing is necessarily special across all the different things you would like to do).

This is a common conception in the sysadmin world -- that somehow sysadmins operate in a more dangerous environment than developers. I don't understand it and I don't agree with it.

I agree with the markup syntax and escaping problems. I have not tried to assess ERb's suitability for specific syntaxes, but even then I'd still prefer a common templating system that could support different syntaxes to a specialized one that would be new and different to everyone.
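For what it's worth, ERb's only special tokens are its own delimiters, so it layers over arbitrary text without per-format escaping rules. A minimal sketch (the template and data are mine, purely illustrative):

```ruby
require "erb"

# ERb treats only <% %> and <%= %> as special; the surrounding text can be
# any syntax at all -- a passwd line, an Apache directive, XML -- so one
# templating system can serve many target formats.
user = { name: "ed", uid: 500, home: "/home/ed", shell: "/bin/bash" }
template = "<%= user[:name] %>:x:<%= user[:uid] %>:<%= user[:uid] %>::" \
           "<%= user[:home] %>:<%= user[:shell] %>\n"  # uid reused as gid for brevity
passwd_line = ERB.new(template).result(binding)
# passwd_line => "ed:x:500:500::/home/ed:/bin/bash\n"
```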

I also must say that your templating system doesn't, um, seem all that user-friendly. :)

The authorisation tool assumes that

(1) it is running on the same host that the profile was generated on (for now, although I'm already regretting this);
(2) the tool that generated the profile was trustworthy, although the files it was generated from might not have been;
(3) your filesystem is not compromised.

It then looks at the derivation of each resource in the compiled profile, and looks up which files it came from. Any tool could have created the compiled profile, all cfgas is checking is whether, given reliable information about the files used to produce each value, the profile is allowed. It essentially copies "authorised" files from its input dir to its output dir. The reason for doing it this way is generality. Sure, I could bundle it with mcfg, but what difference does that make: you either trust mcfg or you don't (right now, you probably shouldn't totally, because the derivation information it gives isn't complete for lists).
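A toy sketch of that check (the data shapes and names are mine, not cfgas's): given reliable derivation information, a compiled value is authorised only if every file it was derived from is trusted.

```ruby
# Hypothetical derivation data: compiled value => source files it came from.
DERIVATIONS = {
  "files.passwd.owner" => ["site/passwd.mcfg", "site/global.mcfg"],
  "files.motd.name"    => ["untrusted/motd.mcfg"],
}

# Files the site policy trusts to set these values.
TRUSTED = ["site/passwd.mcfg", "site/global.mcfg"]

# A value is authorised only if every file in its derivation is trusted.
def authorised?(value, derivations, trusted)
  derivations.fetch(value).all? { |file| trusted.include?(file) }
end
```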

Huh; ok.  That's an interesting way around having an RBAC database.

mcfg is always going to be a little clunky; I think of it (already) as the Pascal of this world: ugly, but theoretically clean, and quite nice if clarity and correctness are your goals. I think I'll stick with the name. I am contemplating a pièce de résistance for this chain, ie a serious constraint compiler, but I'm still evaluating my time commitments in general. I started off to prove that these things _could_be_done_, not to solve all the world's problems; I hoped to spur someone else into picking up these ideas and creating a production-level variant. It is tempting to go for a resume-enhancing effort to make this whole chain usable, but possibly misguided in the long run.

Heh; good luck with someone adopting it as their own. :) I can't even get people to help much with Puppet and I'm actively running it. Oh, and it's not written in C. :)

The DB-then-web-server problem is one I have arrived at the same solution to twice now, using different techniques. There are several related problems, one of which is monitoring (how do I know it went down, how do I know it's back up?), but the most important piece of information you need is what depends on what. You can do this via logic like cfgw: ie you can require that certain propositions are always true, and try to find a path between configurations that keeps them true. The other alternative that I explored with peer-to-peer reconfiguration networks is to simply list "must provide" entries and only allow reconfigurations or migrations in certain circumstances (e.g. my mail server can't go down until all its bound clients have migrated to the other one that's coming up)...

All quite theoretical, and probably some distance from a real implementation, but interesting to think about.
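Still, the "must provide" rule can be sketched in a few lines (all names here are hypothetical): a host may only be taken down if every service it provides is still provided by some other host that is up.

```ruby
# providers: host => services it provides; up_hosts: hosts currently up.
# A host is safe to stop only if none of its services would go unprovided.
def safe_to_stop?(host, providers, up_hosts)
  providers.fetch(host, []).all? do |service|
    providers.any? do |other, services|
      other != host && up_hosts.include?(other) && services.include?(service)
    end
  end
end
```

So a mail server with a redundant peer passes the check, while a sole web server does not.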

My plans involve an event system that supports inter-host dependencies and using events to react to changes in dependent objects, but I don't yet know how I'll handle performing work based on the state of a remote dependency. I expect that monitoring integration is the only reasonable way to do that, but I'm years out from that at this point.

--
Venter's First Law:
   Discoveries made in a field by some one from another discipline will
   always be upsetting to the majority of those inside.
---------------------------------------------------------------------
Luke Kanies | http://reductivelabs.com | http://madstop.com

_______________________________________________
lssconf-discuss mailing list
lssconf-discuss@inf.ed.ac.uk
http://lists.inf.ed.ac.uk/mailman/listinfo/lssconf-discuss