Have you tried putting quotes around "dev.mycompany.com" ?
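Barewords can't contain dots, which is why the parser chokes on the '.' in your selector.  Untested, but something like this should parse (same paths as in your manifest, just with the key quoted):

class spong {
        file { "spong.conf":
                path   => "/tmp/spong.conf",
                source => $domain ? {
                        "dev.mycompany.com" => "puppet:///spong/spong.conf.dev",
                        default             => "puppet:///spong/spong.conf",
                },
                owner => "root",
                group => "root",
        }
}

The tutorial's sunos/redhat examples get away with barewords only because those values are plain words; anything with a dot, dash, or slash needs quotes.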

On May 19, 2009, at 11:57 AM, David Bishop wrote:

>
> Okay, that was a fairly general question; this is more specific.
> While prototyping each of these approaches, I've come across an
> oddity.  I'm trying to use a selector to determine which cluster I'm
> running on, and choose the file path based on that (again, not for
> production, just to make sure this works).
>
> class spong {
>
>         file { "spong.conf":
>                 path   => "/tmp/spong.conf",
>                 source => $domain ? {
>                         dev.mycompany.com => "puppet:///spong/spong.conf.dev",
>                         default           => "puppet:///spong/spong.conf",
>                 },
>                 owner => "root",
>                 group => "root",
>         }
> }
>
> Running facter on the machine, it comes back with (among other things):
>
> domain => dev.mycompany.com
>
> Yet, when running puppet, I get this (twice):
>
> May 19 14:45:16 puppetmaster puppetmasterd[29837]: Could not parse
> for environment production: Syntax error at '.'; expected '}' at
> /etc/puppet/modules/spong/manifests/init.pp:7
>
> The docs at
> http://reductivelabs.com/trac/puppet/wiki/LanguageTutorial#selectors
> reference using the $operatingsystem fact with barewords sunos and
> redhat.  What am I doing wrong?
>
> David
>
> On Tue, May 19, 2009 at 09:45:34AM -0600, David Bishop wrote:
>> I've been tasked with revamping our existing Puppet configs, to make
>> them more manageable/extensible/etc.  We have four(ish) groups of
>> machines that all need similar configs, with slight tweaks depending
>> on which network they're on, etc.  Currently, we have a very "deep"
>> inheritance tree, such that a node will have something like this:
>>
>> base -> diamond_cluster -> diamond_non_admin -> diamond_web_server
>>
>> and traditionally, if there was a need to push a file to most, but
>> not all, webservers, then another layer would be added as such:
>>
>> base -> diamond_cluster -> diamond_non_admin -> diamond_web_server -|-> diamond_web_with_apache2
>>                                                                     |-> diamond_web_with_apache1
>>
>> and the nodes would all be shuffled around to inherit one of the two
>> new classes.  This is... unwieldy, to put it mildly.  Among other
>> things, we were duplicating anything that should be applied to all
>> webservers (for example), as there is no class that all webservers
>> belong to.
>>
>> So, that's the problem.  I've had several ideas for a solution, but
>> would love input from people who have been down this road.  Also,
>> I've used Puppet in the past at other jobs, but never at this scale
>> (over 200 machines) or this complexity.  And finally, I'm new to
>> this job, so I'm still feeling out how many of these differences are
>> valid and how many are holdovers from "the way it's always been",
>> and should just be fixed.
>>
>> Option #1, includes + case statements
>>
>> In this scenario, we have a very short but broad inheritance tree:
>>
>> baseline -|-> diamond_cluster -> node
>>          |-> opal_cluster -> node
>>          |-> ruby_cluster -> node
>>          |-> admin_cluster -> node
>>
>> Then, each *_cluster includes the basic configs for a node on its
>> network, and the nodes include as many server-type-specific modules
>> as they need.  The modules have logic in them to look for the tag
>> associated with the cluster the node inherited, to split out anything
>> specific they need to do.  That sounds confusing, so here's an
>> example:
>>
>> web10.diamond inherits diamond_cluster
>> web10.diamond includes apache2, zenoss_client, and resolv_conf
>>
>> The definitions for apache2 and zenoss_client don't have any special
>> cases that depend on the node's cluster, but resolv_conf notices that
>> the node is a "diamond" and puts in the copy of resolv.conf that adds
>> diamond.company.com to the search order (i.e., has some sort of case
>> statement with a case that matches domain == diamond).
>>
>> That is simplistic, and the specific example (resolv.conf) could
>> probably be handled better another way, but it shows the type of
>> problem we're trying to address.
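>>
>> For concreteness, the resolv_conf module under this option might look
>> roughly like the following (file names are made up, and I haven't
>> tested it; it keys off the domain fact rather than a tag, which may
>> be the simpler route anyway):
>>
>> class resolv_conf {
>>         case $domain {
>>                 "diamond.company.com": {
>>                         file { "/etc/resolv.conf":
>>                                 source => "puppet:///resolv_conf/resolv.conf.diamond",
>>                         }
>>                 }
>>                 default: {
>>                         file { "/etc/resolv.conf":
>>                                 source => "puppet:///resolv_conf/resolv.conf",
>>                         }
>>                 }
>>         }
>> }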
>>
>> Option #2 is similar to Option #1, but using some sort of logic in
>> templates to achieve the same result.  That is, take the same
>> scenario, but instead of resolv_conf having logic in the module, we
>> would have a templated resolv.conf that checks the domain of the
>> node it's being applied to (via tags? or something? I haven't used
>> templates, so I'm fuzzy on the specifics) and changes the search
>> line based on that.
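>>
>> A rough sketch of what I mean (untested, and the nameserver is a
>> placeholder): templates see facts as variables, so a
>> resolv.conf.erb could just interpolate the domain into the search
>> line:
>>
>> search <%= domain %> company.com
>> nameserver 10.10.10.10
>>
>> and the module would use
>> content => template("resolv_conf/resolv.conf.erb") on the file
>> resource instead of source.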
>>
>> Option #3 I'm *really* fuzzy on.  I've read over
>> http://reductivelabs.com/trac/puppet/wiki/Recipes/HandlingDisparateDefinesWithClasses
>> several times, and I'm pretty sure that there is a way to use Paul's
>> method, but I can't quite pin it down.
>>
>> So, sorry for the length.  And I'm probably heading 180 degrees in
>> the wrong direction.  Feel free to tell me I'm dumb and that there
>> is a much better way to do this.  I'm hard to offend, and I would
>> love to do this the Right Way.  Currently, just adding a file to be
>> puppetized is an exercise in frustration, as you try to track down,
>> from all 40+ class definitions, which ones cover all the machines
>> that should have the file, and no more.  And having to redo the
>> inheritance structure for one-offs is just, well, non-optimal.
>>
>> David
>
>


You received this message because you are subscribed to the Google Groups
"Puppet Users" group.