[EMAIL PROTECTED] wrote:
Quoting Luke Kanies <[EMAIL PROTECTED]>:
Ok, so I've looked through the grplace stuff a little bit. My first
question is, how do I associate a template with a specific file? That
is, say I have 50 or so templates, and 25 or so destinations, with
each destination having two templates that could be used for it; how
do I specify which template to use for a given destination (e.g., on
FreeBSD use template1, on Solaris use template2)?
You either specify on the command line which template to use, or you
specify it in a resource, and pass that on the command line. A simple
example might be like this:
(somewhere in the spec)
files.passwd.template = "whatever.grp";
(somewhere in the end tool)
grplace -t k files -k passwd -k template [ etc etc ]
Of course, the final value will be the one that appears in the profile,
so you could specify multiple different templates and override them
depending on specific needs.
Hmm. So, if I wanted to automatically replace every template with its
interpolation, then I would essentially have to post-process the
configuration, looking for any 'template' attributes, and then call
grplace for each one?
What would I then do with the result? Stick it in another attribute to
be sent to the client? I suppose it would depend on the resource type;
'files' would have the 'content' attribute set, but then I couldn't set
arbitrary attributes using templates, since my template post-processor
would need a hard mapping between resource types and attributes
(e.g., files have their 'content' attribute set from template results).
Can you see a way to involve grplace without ever having to call it
manually? That is, have it be an automatic post-processor, or whatever
it takes?
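The post-processor described above can be sketched in a few lines. This is only an illustration of the idea, not grplace or mcfg code; the function name `render_template` and the attribute map are invented, and a real implementation would shell out to grplace where the stub is.

```python
# Sketch of the post-processor described above: walk a compiled
# configuration and, for every resource carrying a 'template' attribute,
# interpolate the template and store the result in a type-specific
# attribute. All names here are hypothetical, not part of grplace/mcfg.

# The hard mapping from resource type to the attribute that receives
# the interpolated result -- exactly the coupling worried about above.
RESULT_ATTR = {
    "files": "content",
}

def render_template(template_name, resource):
    # Stand-in for a call out to grplace; a real implementation would
    # invoke the templater here and capture its output.
    return f"<rendered {template_name} for {resource.get('name', '?')}>"

def post_process(config):
    """config: {resource_type: {resource_name: {attr: value}}}"""
    for rtype, resources in config.items():
        attr = RESULT_ATTR.get(rtype)
        if attr is None:
            continue  # no known target attribute for this resource type
        for name, res in resources.items():
            template = res.get("template")
            if template:
                res[attr] = render_template(template, res)
    return config
```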
[SNIP]
grplace is also a "proper" language in some sense; it has more in common
with Standard ML, I would argue, than it does with m4. The closest rival
was PHP, and I seriously considered just implementing a PHP extension,
but in the end PHP is _much_ too powerful to be a safe tool to do bulk
work in. grplace manages its privileges quite carefully, and is (or,
with more testing, will be) safe to run setuid if needed. This means it
can be used in cases where you couldn't allow a normal templater to run.
I'm still considering a PHP extension (or Perl/Python/Ruby bindings),
but I haven't reached any firm decision.
I still don't really get it. "Normal" templating systems work fine for
web applications and tons of other uses; what is it about the systems
world that disqualifies them?
It doesn't seem like a complicated problem to me, so I must be missing
something.
I've tried to be careful to keep all the bindings loose. This all came
from reading about deployment engines suggesting they could consume the
lisa spec. I didn't intend to write as much as I have, but I've been
enjoying the hacking, so I've just kept going. It's not really the case
that they're post-processors for mcfg, or at least not specifically for
mcfg (I would've done things quite differently if that were my goal)...
it's more like this:
   create
      |
      |
      v
authorise <-----------> <standard profile> <---------------> validate
      |
      |
      v
   deploy
As such, mcfg belongs to the create category, and it's the only entry
point (at the moment), but it's not intended as any kind of canonical
choice. It's just a simple, minimal choice that does some important
things right, and does not do a lot of other things. grcfg (the
command-line query tool) and grplace (the templater) are in the deploy
bracket, although they're more like parts of a deployment engine than
the whole thing; I wrote them mainly out of desperation(!), not out of
some great desire to see them become canonical choices either. That
doesn't mean I don't think I did a good job in some respects; it just
means that I wasn't setting out to create anything definitive, more to
increase the options available.
Ah; so all of the other tools operate on the compiled configurations,
not on the source files?
I could certainly see wanting the templating and validation to operate
at that level (and I can already see how to write a validator for the
mcfg/Puppet linkage). I wouldn't expect these compiled configurations
to ever be modified other than automatically (e.g., via template
interpolation), so I have a harder time seeing why the authorization tool
operates on the compiled configurations; I haven't really looked into
that tool, though, so that might explain it.
mcfg is minimal; it does minimal things. I don't plan to extend it
beyond fixing existing functionality, and just maybe adding one more
list constraint that's a straightforward extension of the two that are
there. I wanted to make a baseline; something more like ed than emacs.
But if creating mcfg and some associated tools means there's a real
target, I'm prepared to go and create something bigger. I have language
specs for much bigger, more powerful languages. They take time, though,
and they take enough commitment that I want them to have some potential
use if I'm going to go there. Before I started, I think it could be
argued that it wasn't clear that there would ever be a potential use.
While I've now been repeatedly castigated for "advertisement" and
"self-promotion", the only way to get your tool used is to get out there
and talk about it. I'd love to see you spend more time on these tools
and develop a workflow around using them, but I can't commit the time to
advertising their use, since adoption of Puppet is how I'm currently
making my living.
I will promise that if we can come up with a clean workflow then I will
write about connecting Puppet to mcfg (are you going to stick with that
name?) and continue experimenting with it, but I can't promise to work
as hard on getting its name out there as I do on Puppet's, until such a
time as mcfg's server-side tools are better than Puppet's.
I haven't tried this; dependency information is interesting, and leads
on naturally to the idea of generating workflows both within machines
and (much harder) between machines. cfgw is something that, if it ever
got fast enough, could do this with a little extension; the 'w' started
off as "workflow", but I quickly realised that validation was as much as
I was going to manage on my first try.
I agree on the order here; Puppet already handles some amount of
intra-host workflow, where dependencies are used to do a topological
sort of all elements being managed so that dependencies are applied
before the dependents, and the dependents can react to changes in
dependencies. It's not much, but it works.
I've got some ideas for how to handle inter-host relationships, but the
same mechanism would no longer work for both ordering (yes, operations
should be ordered, but based on dependencies rather than file order --
you actually do have to create that user before you can chown a file to
it) and reacting to changes. I'm planning on creating a simple
inter-host event system, which I hope will be the agent of reacting to
changes on remote systems, but I haven't had time to spend on that.
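The "simple inter-host event system" mentioned above might be reduced, in its smallest form, to a publish/subscribe bus like the sketch below. This is purely speculative: the class and event names are invented, and a real version would carry events between hosts over the network rather than in-process.

```python
# Toy publish/subscribe bus illustrating the inter-host event idea:
# one host publishes a change event, subscribed hosts react to it.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, event, handler):
        # Register a callback to run whenever `event` is published.
        self.handlers[event].append(handler)

    def publish(self, event, **data):
        # Deliver the event to every subscriber, with its payload.
        for handler in self.handlers[event]:
            handler(**data)

bus = EventBus()
log = []
# e.g. a web host reacting to a change on the database host
bus.subscribe("db.restarted", lambda host: log.append(f"reload app on {host}"))
bus.publish("db.restarted", host="db1")
```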
I have no idea how to handle real workflow issues, like "upgrade no more
than X hosts at a time in this group" or "upgrade the database server
and then the web server". I expect these will be tools that sit higher
than Puppet's language, but I don't know. I've been calling these
"change management" tools, and I think they're in the layer above
configuration management. My current goal with Puppet is to make
configuration management so easy that change management becomes your
biggest problem.
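The "upgrade no more than X hosts at a time" constraint above amounts to simple batching, sketched below; the function name and host names are invented, and a real change-management tool would also wait for health checks between batches.

```python
# Roll a change through a host group no more than `max_at_once` at a time.
def batches(hosts, max_at_once):
    """Yield successive groups of at most max_at_once hosts."""
    for i in range(0, len(hosts), max_at_once):
        yield hosts[i:i + max_at_once]

groups = list(batches(["web1", "web2", "web3", "web4", "web5"], 2))
```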
--
To define recursion, we must first define recursion.
---------------------------------------------------------------------
Luke Kanies | http://reductivelabs.com | http://madstop.com
_______________________________________________
lssconf-discuss mailing list
lssconf-discuss@inf.ed.ac.uk
http://lists.inf.ed.ac.uk/mailman/listinfo/lssconf-discuss