Luke Kanies wrote:
> Yes, you could specifically add this functionality to a given tool, but
> could you create it as a generic component that could be added to any
> tool? Could you see a single validator that could work with Puppet,
> cfengine, and BCFG2?
You assume that it must be integrated. There is a lot of value, however,
in an out-of-band validator that is not integrated. For one thing, it
tests the configuration tool itself. For another thing, it has an easy
path to adoption. Third, it is easy to write such a validator in small,
orthogonal pieces that don't have to talk to one another. In other
words, the component-composition problem (the subject of my student
Yizhan Sun's thesis) goes away, and is replaced with the simpler problem
of writing independent checks.
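To make the idea concrete, here is a minimal sketch of such a validator: small, orthogonal checks that never talk to one another, each reporting its own result. This is an illustration only; all names (check_file_exists, check_tcp_listener, run_checks) are invented for this sketch and do not come from any existing tool.

```python
import os
import socket

def check_file_exists(path):
    """Pass if the path exists; the check neither knows nor cares
    which tool (Puppet, cfengine, BCFG2) put the file there."""
    return ("file:" + path, os.path.exists(path))

def check_tcp_listener(host, port, timeout=2.0):
    """Pass if something accepts TCP connections on host:port,
    out-of-band -- independent of how the service was configured."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return ("tcp:%s:%s" % (host, port), True)
    except OSError:
        return ("tcp:%s:%s" % (host, port), False)

def run_checks(checks):
    """Run every check independently; no check depends on another,
    so there is no component-composition problem to solve."""
    return [check() for check in checks]
```

Because each check is a self-contained function of the observed system state, new checks can be dropped in without coordinating with existing ones, which is exactly the adoption path described above.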
> If you check the system state itself, then you have to deal with
> heterogeneity -- am I using Postfix, Exim, or Sendmail? Where's the
> config file? What format is it? Is the SMTP server set in multiple
> places (e.g., a Java app somewhere)? This is even worse, in my opinion,
> because your tool is doomed to have only limited coverage and it will
> never work for custom apps.
You're assuming that the validator would check "everything". That is not
the role of a validator. It instead checks what it can and reports on
what it has checked.
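This "check what it can" stance can be sketched as a three-valued result: a check that does not recognize the local setup (say, an unknown MTA) reports SKIP rather than FAIL, so the report is explicit about its own coverage. Again, the names (check_mta_config, summarize) and the Postfix-only rule are hypothetical, invented purely to illustrate the point.

```python
from collections import Counter

PASS, FAIL, SKIP = "PASS", "FAIL", "SKIP"

def check_mta_config(detected_mta):
    """Hypothetical check that only knows how to validate Postfix;
    anything else is SKIP, not FAIL -- limited coverage is reported,
    never hidden."""
    if detected_mta != "postfix":
        return ("mta-config", SKIP)
    return ("mta-config", PASS)  # real validation logic would go here

def summarize(results):
    """Report coverage explicitly: how much was checked, how much
    the validator declined to judge."""
    counts = Counter(status for _, status in results)
    return {s: counts.get(s, 0) for s in (PASS, FAIL, SKIP)}
```

A validator built this way is honest about heterogeneity: on a host running Exim it simply reports that the MTA check was skipped, rather than claiming the configuration is wrong.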
The big issue here is "how can we cooperate toward better tools?" You
have lamented that you're the only one doing Puppet. For you, a "good"
workshop would provide you with some help. But that hasn't happened for
a number of reasons, notably, that attendees' visions differ even on how
the problem of configuration management should be approached.
And you yourself have indicated that you are not flexible on the
implementation details. So the potential contributor has little to
motivate the contribution, other than altruism. As a "theoretical"
observation, people find it easier to justify their involvement with a
project when they have some personal stake in the outcome.
I *was* planning on preparing a presentation on "what we can learn from
service architectures" but I am leery of doing that now, given the
current thread of discussion. It would seem that anything I can possibly
say is "theory" and therefore not of much interest. I have little time
and no wish to force myself upon people. My aim is only to serve, and if
keeping quiet about the forces gathering that promise to transform our
discipline in the next few years is "service", then I am willing to
perform that "service". :)
And, to be blunt, I have as little interest in learning how I personally
can contribute to Puppet as others seem to have in the more theoretical
aspects of our discipline. It is simply not cost-effective for me to
contribute; I don't envision enough of a personal return; and that was
the point of my cost-analysis paper last year. The reason people
don't do as we do is that it is not cost-effective for them to do so.
But how familiar is this story? I think it is the rule rather than the
exception.
Frankly, once the features of a tool are discussed, what is there left
to discuss but the future? Where are the case studies? Who should we
invite who can lend "practical" insight? What data should we study?
I'm all ears...
Dr. Alva L. Couch
Associate Professor of Computer Science
Associate Professor of Electrical and Computer Engineering
Tufts University, 161 College Avenue, Medford, MA 02155
Phone: +1 (617) 627-3674
lssconf-discuss mailing list