Re: [lssconf-discuss] Theory vs Practice

2006-10-12 Thread Narayan Desai
 Luke == Luke Kanies [EMAIL PROTECTED] writes:

  Luke The problem is that there appears to be a split whether we
  Luke want it or not.  The last few workshops have been very
  Luke frustrating for me, because they haven't really even tried to
  Luke address how a sysadmin would take advantage of what we talked
  Luke about.  Tools are cool and great, but how do they make
  Luke people's lives better?

  Luke I'm in the midst of building a large framework (hell, it's 
  Luke 50k lines with test code) and a community around that
  Luke framework.  How can the workshop help me make a better tool?
  Luke How can the workshop help my users more effectively use
  Luke Puppet?  How can the workshop help potential users choose
  Luke between the different tools?

Perhaps I was wrong; maybe there are three classes of attendees:
researchers, tool developers, and users. I am not sure that I actually
believe this is a good split. I agree strongly with Paul: being
current in the research is critical for anyone who is going to write
tools today that won't be discarded before tomorrow.

  Luke And when I say better tool, I mean today, this week, next
  Luke month -- not some day, not three years from now.  As a
  Luke developer, I'm looking for great features to add, like bcfg2's
  Luke reporting or LCFG's spanning maps, and how those features will
  Luke affect my users.

I guess that I come down in the middle on this one as well; those
features aren't just features; they are research and theory put into
practice.  

  Luke I've tried to talk about the theory behind Puppet, and it gets
  Luke a very poor reception, if not sometimes outright hostile.  I
  Luke am pretty set in how Puppet works, and I'm in the
  Luke put-up-or-shut-up phase.  No one seems particularly interested
  Luke in the fact that I'm trying to create an operating system
  Luke abstraction layer, so the best I can do is create what I think
  Luke is a good tool and see if others are interested in using it.
  Luke I think it's theoretical suicide for any automation tool *not*
  Luke to either create or use an operating system abstraction layer,
  Luke but I'm apparently the only one who thinks that.

I have two points here. First, I think that you are conflating theory
and design. For example (a little closer to home), I could say that
an aspect of the design used by bcfg2 is similar to the theoretical
work on formalizing closures that Alva has done. I think they are two
different things, and this is an important distinction.

On the second point, I just don't think anyone is all that interested
in the OS abstraction layer, because everyone already deals with this
issue somehow: Paul uses sxprof constructs and plugins in LCFG, we use
our group structure and client code to do this in Bcfg2, and so on.

  Luke I'm sure there are critical theoretical flaws in Puppet, but
  Luke they don't seem to be affecting me or my users.  What is
  Luke affecting us is that I'm still doing 98% of the development
  Luke and I have a relatively small user base.  What I want to know
  Luke as a tool developer is how to fix that, and as an
  Luke experimentalist I need the same answers so that the experiment
  Luke can be carried out with a more complete framework and a wider
  Luke set of test subjects.

This is only one aspect of staying in line with current theory. I made
a big design mistake in bcfg; theory predicted it early, but I ignored
it until it became a big problem. When it did, it became a fire to put
out. I learned my lesson there.

The other thing I am trying to keep in mind is building bcfg2 in such
a way that I can plug in advanced infrastructure as it becomes
available. This isn't easy much of the time, but I think it is the
right goal in the long term.

  Luke The config-mgmt BoF has averaged more than 70 people every
  Luke year since I've been running it; you just can't get the same
  Luke intimate discussion in that kind of environment.  And having
  Luke the full 8 hours is very useful.

I agree. I think that the topical area for the practice workshop will
be dramatically different from the current config workshop. I would
only expect it to siphon off people that are not interested in the
research topics. I expect many people to attend both. 
 -nld
___
lssconf-discuss mailing list
lssconf-discuss@inf.ed.ac.uk
http://lists.inf.ed.ac.uk/mailman/listinfo/lssconf-discuss


Re: [lssconf-discuss] Theory vs Practice

2006-10-12 Thread Narayan Desai
 Luke == Luke Kanies [EMAIL PROTECTED] writes:

  Luke If you integrate it with the configuration generator, then
  Luke you've got to have a tight semantic bond between the validator
  Luke and the generator (i.e., it's not enough that the box be a
  Luke mail server, it must specifically listen for smtp requests on
  Luke the port we plan on using); this means that the generator has
  Luke to have clear semantics here and then have hooks for some
  Luke other tool to use them.

  Luke Yes, you could specifically add this functionality to a given
  Luke tool, but could you create it as a generic component that
  Luke could be added to any tool?  Could you see a single validator
  Luke that could work with Puppet, cfengine, and BCFG2?

  Luke I expect Puppet's semantics aren't clear enough right now that
  Luke you could do this, although I don't know much about the
  Luke validation research, so I could easily be wrong.

This was the point of the paper Paul and Ed did last year. The way to
go is to agree on an intermediate format that several tools can
consume in an opaque fashion. The linkage into a given tool is
tool-specific, but the constraint compiler (or whatever sits above it)
can just output a single format.
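
Roughly the shape I have in mind, as a minimal sketch in Python (the
entry format, field names, and renderings below are invented for
illustration; they are not the format from the paper):

    # Sketch only: a higher-level tool emits one tool-neutral description
    # of desired state; each tool's linkage translates it however it likes.
    import json

    intermediate = [
        {"type": "service", "name": "smtp", "state": "running", "port": 25},
        {"type": "file", "path": "/etc/aliases", "owner": "root", "mode": "0644"},
    ]

    def name_of(entry):
        # Fall back to the path for entries that have no name.
        return entry.get("name", entry.get("path"))

    def to_bcfg2(entries):
        # Tool-specific linkage; the rendering is a stand-in, not real
        # Bcfg2 configuration.
        return "\n".join("bcfg2 entry: %s %s" % (e["type"], name_of(e))
                         for e in entries)

    def to_puppet(entries):
        # A second tool consumes exactly the same opaque data.
        return "\n".join('%s { "%s": }' % (e["type"], name_of(e))
                         for e in entries)

    print(json.dumps(intermediate, indent=2))   # the single shared format
    print(to_bcfg2(intermediate))
    print(to_puppet(intermediate))

The point is just that neither side needs to know anything about the
other's internals.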

For what it is worth, I think that I finally have a good place to plug
this interface in. Does anyone have any higher-level tools they want
to experiment with?
 -nld


Re: [lssconf-discuss] Theory vs Practice

2006-10-12 Thread Luke Kanies

Narayan Desai wrote:

Luke == Luke Kanies [EMAIL PROTECTED] writes:


  Luke The problem is that there appears to be a split whether we
  Luke want it or not.  The last few workshops have been very
  Luke frustrating for me, because they haven't really even tried to
  Luke address how a sysadmin would take advantage of what we talked
  Luke about.  Tools are cool and great, but how do they make
  Luke people's lives better?

  Luke I'm in the midst of building a large framework (hell, it's 
  Luke 50k lines with test code) and a community around that
  Luke framework.  How can the workshop help me make a better tool?
  Luke How can the workshop help my users more effectively use
  Luke Puppet?  How can the workshop help potential users choose
  Luke between the different tools?

Perhaps I was wrong; maybe there are three classes of attendees:
researchers, tool developers, and users. I am not sure that I actually
believe this is a good split. I agree strongly with Paul: being
current in the research is critical for anyone who is going to write
tools today that won't be discarded before tomorrow.


I would always expect the tool developers to be responsible for merging 
the research and practice; I don't think we'd get much benefit from just 
having tool developers talk to each other about their tools.


I'm not trying to be down on research, but I often feel like config-mgmt
research is in the string theory realm, where we can talk for years but
don't carry out any experiments.  Good research is founded in good data, 
and there's precious little data in the config-mgmt world.  And for the 
record, I'm not the only one who feels this way -- after every workshop, 
I have a few people come up and complain that they sat through the same 
workshop the year before.


I've got what I think is a good way to get data, or something like it, 
and I'm full-up on research until I make a bit of progress.  I have a 
theory, I'm testing it, and I'm trying to make a living at the same time 
(the fact that I'm trying to survive by developing Puppet surely has an 
impact on how I see all of this).


If I come across research that affects my ability to test my theory, I 
want to integrate it; if I come across data that either supports or 
disproves my theory, then I clearly need to know about it; but there are 
many areas of research that are effectively orthogonal to my experiment, 
and I just can't be interested in staying up on all of those, just like 
my wife (who's a cancer researcher) has only a passing interest in areas 
other than cell-cycle research.



  Luke And when I say better tool, I mean today, this week, next
  Luke month -- not some day, not three years from now.  As a
  Luke developer, I'm looking for great features to add, like bcfg2's
  Luke reporting or LCFG's spanning maps, and how those features will
  Luke affect my users.

I guess that I come down in the middle on this one as well; those
features aren't just features; they are research and theory put into
practice.  


Clearly some features are just features, and some are theory-backed.
If theory doesn't become features at some point, though, I don't have
much use for it, so I'm more interested in the features than in the
theory, at least as it pertains to my day-to-day job right now.



  Luke I've tried to talk about the theory behind Puppet, and it gets
  Luke a very poor reception, if not sometimes outright hostile.  I
  Luke am pretty set in how Puppet works, and I'm in the
  Luke put-up-or-shut-up phase.  No one seems particularly interested
  Luke in the fact that I'm trying to create an operating system
  Luke abstraction layer, so the best I can do is create what I think
  Luke is a good tool and see if others are interested in using it.
  Luke I think it's theoretical suicide for any automation tool *not*
  Luke to either create or use an operating system abstraction layer,
  Luke but I'm apparently the only one who thinks that.

I have two points here. First, I think that you are conflating theory
and design. For example (a little closer to home), I could say that
an aspect of the design used by bcfg2 is similar to the theoretical
work on formalizing closures that Alva has done. I think they are two
different things, and this is an important distinction.


I could certainly be conflating the two; if that's the case, then I
probably have no theory and only a design, which seems somewhat unlikely
but would be tolerable to me.  At the least, I expect Puppet to be
fertile ground for research if someone were so inclined, because there
would be a clean line between the interesting stuff above (compiling
configurations, for instance) and the boring crap below (what's the
order of arguments to crontab on Linux vs. Solaris?), and I certainly
hope that other people can take advantage of Puppet's agents and
library to avoid reinventing the wheel.
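
To make that line concrete, here is a minimal sketch of the kind of
split I mean, in Python (the class names are invented and this is not
Puppet's actual code -- Puppet itself is Ruby -- and the exact crontab
invocations are only illustrative):

    # Sketch of an OS abstraction layer: the resource above says *what*
    # should be true; the provider below knows the boring, OS-specific
    # *how*.
    import platform

    class LinuxCron:
        def install_command(self, user):
            # On Linux the target user is typically passed as a flag.
            return ["crontab", "-u", user, "-"]

    class SolarisCron:
        def install_command(self, user):
            # On Solaris the invocation differs; the point is that the
            # difference lives here and nowhere else.
            return ["su", user, "-c", "crontab -"]

    class CronEntry:
        """The interesting layer: declare the entry, carry no OS trivia."""
        def __init__(self, entry, user="root"):
            self.entry, self.user = entry, user

        def provider(self):
            return SolarisCron() if platform.system() == "SunOS" else LinuxCron()

        def apply(self):
            cmd = self.provider().install_command(self.user)
            # A real tool would execute cmd and feed it the entry on stdin.
            print("would run %r and feed it %r" % (cmd, self.entry))

    CronEntry("0 3 * * * /usr/local/bin/backup").apply()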



On the second point, I just don't think anyone is all that interested
in the OS 

Re: [lssconf-discuss] Theory vs Practice

2006-10-12 Thread Luke Kanies

Alva Couch wrote:

Luke Kanies wrote:

Yes, you could specifically add this functionality to a given tool, 
but could you create it as a generic component that could be added to 
any tool?  Could you see a single validator that could work with 
Puppet, cfengine, and BCFG2?


You assume that it must be integrated. There is a lot of value, however,
in an out-of-band validator that is not integrated. For one thing, it
tests the configuration tool itself. For another thing, it has an easy
path to adoption. Third, it is easy to write such a validator in small,
orthogonal pieces that don't have to talk to one another. In other
words, the component composition problem (the subject of my student
Yizhan Sun's thesis) goes away, and is replaced with the simpler problem
of comprehensiveness.


I would hope the validator would operate as a separate component, but it 
would at least have to validate at the same level of abstraction as the 
tool being validated.  A validator for both Puppet and cfengine would be 
very difficult, because cfengine is so file-based and Puppet largely 
allows you to ignore file contents, or at least treat them at a 
higher level.



You're assuming that the validator would check everything. That is not
the role of a validator. It instead checks what it can and reports on
strange situations.


That is true; I had not thought of that.
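
Something like this, maybe -- small, independent checks that just probe
the live system and report, without knowing anything about the tool that
configured it (a rough Python sketch; the checks themselves are invented
examples):

    # Each check stands alone, probes what it can, and reports; none of
    # them talk to each other or know which tool configured the box.
    import os
    import socket

    def check_smtp_listening(host="localhost", port=25):
        try:
            socket.create_connection((host, port), timeout=2).close()
            return "ok: something is answering on %s:%d" % (host, port)
        except OSError:
            return "strange: nothing listening on %s:%d" % (host, port)

    def check_resolver_config(path="/etc/resolv.conf"):
        if not os.path.exists(path):
            return "strange: %s is missing" % path
        return "ok: %s is present" % path

    if __name__ == "__main__":
        for check in (check_smtp_listening, check_resolver_config):
            print(check())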


The big issue here is how we can cooperate toward better tools. You
have lamented that you're the only one doing Puppet. For you, a good
workshop would provide you with some help. But that hasn't happened for
a number of reasons, notably, that attendees' visions differ even on how
the problem of configuration management should be approached.


Actually, I do not at all expect anything at LISA to result in more 
people working on Puppet; not many sysadmins are also developers, and 
few of those who are have any extra time.  At LISA, I expect to be able 
to work closely with high-level practitioners to help them understand 
how they can use tools like Puppet (yes, any tool will do -- I'd *much* 
rather someone used Bcfg2 or LCFG than nothing at all), and to help me 
understand how I can create a better tool for them.


Tom's presentation last year was golden to me -- he's a widely 
acknowledged expert in the field, training the next generation of
sysadmins, and he stood in front of us and did a great job of justifying 
why he's rejected automation.  How can we answer Tom?  That has haunted 
me all year; I think some aspects of Puppet do answer him, but not 
nearly enough.


However, if I can make a better tool, and I can get more people using 
automation tools, then that increases the overall market, which will 
naturally draw a few more people to using and developing Puppet.


And you yourself have indicated that you are not flexible on the
implementation details. So the potential contributor has little to
motivate the contribution, other than altruism. As a theoretical
observation, it helps people to justify their involvement with a
project if they have some personal stake in the outcome.


Heh, well; there is an aspect of the implementation on which I am not 
flexible, but that aspect essentially defines what Puppet is.  It's true 
that I would reject contributions that broke the abstraction layer, just 
like I would reject contributions that coupled layers unnecessarily (in 
fact, there have been many recommendations that would have resulted in 
this coupling and which I have denied).  However, every project has its 
sticking points, and Puppet's abstraction layer is its main sticking point.


On the other side, Puppet is amazingly easy to develop for; there are 
tons of little pieces that can be easily added, and you can use those 
pieces immediately without having to fork the code or modify the core or 
anything.  I've been pleasantly surprised with what people have done; 
apparently Puppet is now a core part of Red Hat's stateless Linux 
project, and someone from Red Hat has written Puppet types that model 
all of the LVM filesystem stuff, meaning you can use Puppet to create 
volumes and filesystems.



I *was* planning on preparing a presentation on what we can learn from
service architectures but I am leery of doing that now, given the
current thread of discussion. It would seem that anything I can possibly
say is theory and therefore not of much interest. I have little time
and no wish to force myself upon people. My aim is only to serve, and if
keeping quiet about the forces gathering that promise to transform our
discipline in the next few years is service, then I am willing to
perform that service. :)


This is actually *exactly* what I want to hear -- how can we apply 
knowledge from other disciplines so we don't have to make the same 
mistakes again?  I wish sysadmins would take more time to copy other 
fields, instead of always assuming we're special.



And, to be blunt, I have as little interest in learning how I personally
can contribute 

Re: [lssconf-discuss] Theory vs Practice

2006-10-12 Thread Daniel Hagerty
  Again, I agree no one cares about the abstraction layer, but I am
  flabbergasted that this is the case.  I assume no one cared about
  portable languages when C and libC were developed either; I know I don't
  have the smarts of Kernighan et al, but I'll keep pushing until I fail
  or someone smarter takes over, I guess.

Poor assumption.

The first Fortran compiler was delivered in 1957.  Lisp was
specified in 1958, as was the first version of Algol.

C was a pun on B, which was a distillation of BCPL.  The family
owes much to Algol (as does most any programming language you touch).


The early questions about using high-level languages revolved more
around which applications they were suitable for, and around when the
tradeoffs of relatively poor code generation were worth it.  The
notion of using a high-level language for the bulk of an operating
system was somewhat radical in 1973, but it had been done as early as
1961.

While we can lament the speed of adoption, it's worth considering
the 20+ years it took for high-level languages to be generally
considered worth it.


Re: [lssconf-discuss] Theory vs Practice - TAL's presentation?

2006-10-12 Thread Narayan Desai
 Brandon == Brandon S Allbery KF8NH [EMAIL PROTECTED] writes:

  Brandon Luke recently mentioned a presentation by Tom Limoncelli
  Brandon about why he doesn't do automated configuration management;
  Brandon does anyone have a pointer to this, or a summary, etc.?
  Brandon I'm still coming up to speed on a lot of this stuff (and
  Brandon noticing that the currently existing tools don't in general
  Brandon seem to fit our needs very well... but then neither does
  Brandon what we're currently using :/ ).

Tom and I gave similar talks (though from different perspectives)
about lack of tool traction and administration procedures last year
at the config workshop. I think it likely that Paul has posted the
slides someplace.
 -nld