Hi,

I had the opportunity to work again on this patch to add the so-called
puppet device app and cisco facts.

It's not yet ready, so I won't send the new patches to the list today
(they should be available in my github branch, of course).

On 18/01/11 21:19, Luke Kanies wrote:
> On Jan 18, 2011, at 4:15 AM, Brice Figureau wrote:
> 
>> On Mon, 2011-01-17 at 21:57 -0800, Luke Kanies wrote:
>>> On Jan 14, 2011, at 1:31 AM, Brice Figureau wrote:
>>> 
>>>> Hi,
>>>> 
>>>> It looks like I forgot to answer this e-mail :(
>>>> 
>>>> On Fri, 2011-01-07 at 13:04 -0800, Luke Kanies wrote:
>>>>> On Jan 7, 2011, at 11:44 AM, Brice Figureau wrote:
>>>>> 
>>>>>> On 07/01/11 19:13, Luke Kanies wrote:
>>>>> [...]
>>>>>>> Hi Brice,
>>>>>>> 
>>>>>>> This is great work, and it's work I've been excited to
>>>>>>> see for a long time.
>>>>>>> 
>>>>>>> One thing that's not clear to me from this -- would you
>>>>>>> ever expect to have a device's resources mixed in with a
>>>>>>> normal host's resources?
>>>>>> 
>>>>>> Yes, that's correct, because that was the simplest way to
>>>>>> bring in the feature.
>>>>>> 
>>>>>>> That is, would you expect to see a given host's catalog
>>>>>>> containing anything other than device resources for a
>>>>>>> single device?  It looks like I could use this to have
>>>>>>> vlan resources configured on multiple devices, plus
>>>>>>> normal Unix resources in a completely unrelated host.
>>>>>> 
>>>>>> You're right. They're just normal resources except they're
>>>>>> remote. In my view you'd just designate one of your nodes close
>>>>>> to the remote device to manage it.
>>>>>> 
>>>>>>> We've been talking about this a good bit internally, and
>>>>>>> one of the things I want to do with this kind of work
>>>>>>> (managing systems that can't run Puppet agents) is retain
>>>>>>> the idea that a given catalog is only ever for one
>>>>>>> device.  This could be done right now using puppetd and
>>>>>>> just setting certname specially (and maybe
>>>>>>> configdir/vardir generally), so it's more of a conceptual
>>>>>>> discipline at this point.
>>>>>> 
>>>>>> Yes, that's right, this feature can be emulated with a
>>>>>> given certname.
>>>>>> 
>>>>>>> In this model, you'd either have a single resource that
>>>>>>> defines how to connect with the device:
>>>>>>> 
>>>>>>> device { $hostname: user => ..., pass => ..., }
>>>>>>> 
>>>>>>> Or you'd create a special application that does this:
>>>>>>> 
>>>>>>> $ puppet device --proto ssh --user <user> --pass <pass> 
>>>>>>> cisco2960.domain.com
>>>>>>> 
>>>>>>> Or something like that.
>>>>>> 
>>>>>> I see where you're going :)
>>>>>> 
>>>>>>> Do you think this kind of model makes sense?  Is this
>>>>>>> basically how you were planning on using this work?
>>>>>> 
>>>>>> This certainly makes sense, but frankly I didn't think about
>>>>>> it. I was more concerned with the remote management than with
>>>>>> the user interface of the feature.
>>>>>> 
>>>>>> The problem with having a specific device daemon/app is
>>>>>> that you'll then have to run one daemon per managed device.
>>>>>> This is certainly good for reporting or such features, but
>>>>>> would be awful in terms of resource consumption. We could
>>>>>> do this in one puppetd run by asking for more than one
>>>>>> catalog (one for every "device" and one for the given
>>>>>> node).
>>>>> 
>>>>> It doesn't have to be one process per device - you could have
>>>>> 'puppet device' (or whatever -- maybe 'proxy'?) take a list
>>>>> of names, or take a search string to use against Dashboard,
>>>>> or just about anything. Heck, it could be a long-running
>>>>> process that had a schedule for each host, and either forked
>>>>> off or used a thread for each host.
>>>> 
>>>> Don't you think it would be a clone of the puppet agent (in
>>>> terms of code)?
>>> 
>>> It would be similar except:
>>> 
>>> * The process should be able to reasonably handle N devices,
>>> where N >> 1.
>> 
>> Of course. But that could essentially be just a loop around the 
>> configurer, driven by a configuration file (ie
>> /etc/puppet/device.conf) (and see below for a more complete
>> description of this idea).
> 
> Yep.  I don't by any means think this executable should be very
> complicated.

It isn't for the moment.

>>> * The process should never, ever affect local system state (and
>>> thus could run on a central server)
>> 
>> That's correct, and that's not what the current version of this
>> patch does (which is wrong).
>> 
>>> * You could do more interesting things with the providers and
>>> such if you knew that all resources were only for that one kind
>>> of device and device instance
>> 
>> Here's how I see the puppet device system:
>> 
>> 1) We have a (central) device configuration file. This should help
>> us map certnames to device urls (this could later on be stored
>> elsewhere, like the dashboard or anything else).
>> 
>> The format could be:
>> 
>> [device.domain.com]
>>   type = cisco
>>   url = ssh://user:[email protected]/?enable=enablepassword
>> 
>> ...other devices...
>> 
>> I'm mentioning "type" here, but this could also be stored in the
>> manifests by explicitly specifying a provider.
> 
> Would it be reasonable to have this data in an external node
> classifier, with some kind of search interface?  Then the device
> process just iterates over all devices, or something similar?

To start simple, I stuck with the /etc/puppet/device.conf file I
mentioned earlier in the thread.
I'll do another pass once puppet device is more mature to add an ENC
system, a daemon mode, etc.
I want to get the base system running first :)
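
For illustration, here is a minimal sketch of how that device.conf
could be parsed (the section/key names follow the example format
earlier in the thread; the function name is hypothetical, not anything
in Puppet's code):

```ruby
# Minimal sketch: parse an INI-style device.conf into a hash keyed by
# device certname, each entry holding its "type" and "url" settings.
def parse_device_conf(text)
  devices = {}
  current = nil
  text.each_line do |line|
    line = line.strip
    next if line.empty? || line.start_with?('#')
    if line =~ /\A\[(.+)\]\z/
      current = Regexp.last_match(1)       # [certname] section header
      devices[current] = {}
    elsif current && line =~ /\A(\w+)\s*=\s*(.+)\z/
      devices[current][Regexp.last_match(1)] = Regexp.last_match(2)
    end
  end
  devices
end

conf = <<~EOF
  [device.domain.com]
  type = cisco
  url = ssh://user:[email protected]/?enable=enablepassword
EOF

devices = parse_device_conf(conf)
```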

>> 2) Puppet device is responsible for parsing this file, and for
>> each device:
>> * use the device certname as our current $certname
>> * switch $vardir (and other essential state directory settings) to
>>   $vardir/devices/$certname, creating those if they don't exist at
>>   the same time
>> * set the facts terminus to :device, which would route the
>>   configurer fact request to the correct terminus for the given
>>   device type/model
>> * call the configurer to fetch a catalog and apply it to the remote
>>   device; since $vardir is overridden, all the state goes to a
>>   different directory
>> * clean up the mess we made with overriding $vardir
>> * wash, rinse, repeat for the next device.
>> 
>> This should partially isolate the current host from the device
>> system (at least the puppet internal stuff).
> 
> Yep, this seems like a good fit.

This part is implemented.
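
In spirit, the per-device loop looks roughly like this (the settings
hash and the method name are simplified stand-ins, not Puppet's real
settings or Configurer APIs):

```ruby
# Sketch of the per-device isolation described above: override
# $certname and $vardir for one device, run the given work, then
# restore the old values so the next device starts fresh.
require 'fileutils'

def run_for_device(certname, base_vardir, settings)
  old_certname = settings[:certname]
  old_vardir   = settings[:vardir]
  settings[:certname] = certname
  settings[:vardir]   = File.join(base_vardir, 'devices', certname)
  FileUtils.mkdir_p(settings[:vardir])   # create the state dir if absent
  begin
    yield settings   # fetch the catalog and apply it to the remote device
  ensure
    # clean up the overrides, whatever happened during the run
    settings[:certname] = old_certname
    settings[:vardir]   = old_vardir
  end
end
```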

>> Now, I don't really see how we can achieve perfect isolation.
>> Nothing can prevent someone from adding a destructive file{}
>> resource to the device catalog, which unfortunately will be applied
>> as is. Or maybe we can use some kind of catalog filtering and allow
>> only the network device types...
> 
> I can see a couple of ways.
> 
> The "simplest" is that you have a post-compile hook that just makes
> sure only device-specific resources are in the catalog.  It's simple,
> but not terribly supportable in the long term.  This would probably
> be easiest by validating the provider, rather than the type itself,
> since that's what's modifying the system.

But this process happens on the master, correct?
The problem is that the master doesn't know it's talking to a puppet
device.

> The better solution in the long term, IMO, is to have a
> constraint-based system for resource types and/or providers, so you
> could reasonably restrict a catalog to only having types that
> function on, say, devices, rather than POSIX OSes.  I don't really
> know how this would work, but at the least, it would involve having
> every provider list its constraints and/or features, and then a given
> catalog could be restricted based on them.

Yes, that's appealing. We could leverage the confine system we have on
the providers to do that.
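
As a rough sketch of the post-compile filtering idea (every name here
is hypothetical — Puppet's real catalog and provider objects look
different — it just shows filtering on declared provider features):

```ruby
# Sketch: keep only resources whose provider declares the feature the
# target device requires, and warn about everything that gets dropped.
Resource = Struct.new(:type, :title, :provider_features)

def filter_for_device(resources, required_feature)
  kept, rejected = resources.partition do |r|
    r.provider_features.include?(required_feature)
  end
  rejected.each do |r|
    warn "skipping #{r.type}[#{r.title}]: provider lacks #{required_feature}"
  end
  kept
end
```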

> I don't think it'd be quite as simple as trusting existing provider
> constraints, because you'd actually want some way to guarantee that
> the host machine isn't being modified, but it might be.

That's correct.

>> This (even partial) isolation has the benefit of bringing proper
>> device facts, and reports/catalogs will be correctly fetched/pushed
>> for the given device certname.
>> 
>> In my view puppet device would not be a daemon, but that should be 
>> something that can be added. I don't exactly remember how the agent
>> is used, I'll have to check that, though.
>> 
>> Does it match what you were thinking?
> 
> Yes, exactly.
> 
>> There is still something that I find clumsy, or not following the
>> "puppet way". Until now, every type was able to autodiscover the
>> correct provider (or could be forced to a given provider). We
>> currently lose this model if we specify the device type in the
>> config file.
> 
> Yeah.  I think we will need to specify enough information that we can
> figure out how to discover the facts about the host we're
> configuring, but the rest should be autodiscovered as much as
> possible based on those facts.  Given that we do have to connect from
> an external host, we're fundamentally limited in some ways.

One thing that will be difficult is auto-detecting the kind of device
we're connecting to (especially because almost every device has a
different authentication prompt when using telnet).
For the moment I took the easy way and let the user specify the device
type in the config file, but ultimately it'd be great to auto-discover
the correct model we're connecting to...
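
If someone wanted to attempt it anyway, a naive approach would be to
match the login banner against known patterns (the patterns below are
purely illustrative, which is exactly why this is hard — real devices
vary widely):

```ruby
# Hypothetical sketch: guess the device family from its login banner.
# Unknown banners return nil, falling back to the type in device.conf.
PROMPT_PATTERNS = {
  /User Access Verification/i => :cisco,
  /JUNOS/i                    => :juniper,
  /ProCurve/i                 => :hp_procurve
}.freeze

def guess_device_type(banner)
  PROMPT_PATTERNS.each do |pattern, type|
    return type if banner =~ pattern
  end
  nil
end
```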

-- 
Brice Figureau
My Blog: http://www.masterzen.fr/

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Developers" group.